Testing Fundamentals (testing, ~15 mins)

Test reporting in pipelines (Testing Fundamentals: Build an Automation Script)

Verify test report generation in CI pipeline
Preconditions (3)
Step 1: Trigger the CI pipeline to run automated tests
Step 2: Wait for the tests to complete
Step 3: Locate the generated test report file in the pipeline artifacts
Step 4: Open the test report file
Step 5: Verify the report contains the test suite name
Step 6: Verify the report lists all executed test cases
Step 7: Verify the report shows the pass/fail status for each test case
Step 8: Verify the report includes summary statistics (total tests, passed, failed)
✅ Expected Result: The test report file is generated and contains accurate details of all executed tests with correct pass/fail status and summary statistics.
Automation Requirements - Python unittest with xmlrunner
Assertions Needed:
Test report file exists after test run
Test report XML contains test suite name
Test report XML lists all test cases executed
Test report XML shows correct pass/fail status for each test
Test report XML includes summary with total, passed, and failed counts
Best Practices:
Use explicit waits or polling to ensure test report file is generated before reading
Parse XML report using standard XML libraries for validation
Keep test names descriptive for clear reporting
Integrate test report generation as part of the test run command
Store test reports as pipeline artifacts for later inspection
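The XML-parsing best practice above can be sketched with the standard library alone. The snippet below parses a hypothetical JUnit-style report (the XML layout is assumed for illustration; real xmlrunner output may include extra attributes such as timestamps):

```python
import xml.etree.ElementTree as ET

# Hypothetical JUnit-style report, assumed for illustration only
SAMPLE_REPORT = """<testsuite name="SampleTests" tests="2" failures="1">
    <testcase classname="SampleTests" name="test_pass" time="0.001"/>
    <testcase classname="SampleTests" name="test_fail" time="0.002">
        <failure message="False is not true">assertion details</failure>
    </testcase>
</testsuite>"""

root = ET.fromstring(SAMPLE_REPORT)

# Summary statistics live in the <testsuite> attributes
total = int(root.attrib['tests'])
failed = int(root.attrib['failures'])

# A test failed if its <testcase> element has a <failure> child
failed_names = {tc.attrib['name'] for tc in root.findall('testcase')
                if tc.find('failure') is not None}

print(total, failed, sorted(failed_names))
```

Parsing with ElementTree rather than string matching keeps the checks robust to attribute ordering and whitespace changes in the report.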
Automated Solution
import unittest
import xml.etree.ElementTree as ET
import os
import time

import xmlrunner

class SampleTests(unittest.TestCase):
    def test_pass(self):
        self.assertTrue(True)

    def test_fail(self):
        self.assertTrue(False)


def run_tests_and_generate_report(report_path):
    with open(report_path, 'wb') as output:
        unittest.main(testRunner=xmlrunner.XMLTestRunner(output=output), exit=False)


def wait_for_file(file_path, timeout=10):
    start = time.time()
    while time.time() - start < timeout:
        if os.path.exists(file_path):
            return True
        time.sleep(0.5)
    return False


def validate_report(report_path):
    assert os.path.exists(report_path), f"Report file {report_path} does not exist"

    tree = ET.parse(report_path)
    root = tree.getroot()

    # Some xmlrunner versions wrap results in a <testsuites> root element
    if root.tag == 'testsuites':
        root = root.find('testsuite')
        assert root is not None, "No <testsuite> element found in report"

    # Check test suite name (xmlrunner may qualify it, e.g. '__main__.SampleTests')
    suite_name = root.attrib.get('name', '')
    assert 'SampleTests' in suite_name, f"Expected suite name containing 'SampleTests', got '{suite_name}'"

    # Collect test cases
    testcases = root.findall('testcase')
    testcase_names = {tc.attrib['name'] for tc in testcases}
    expected_tests = {'test_pass', 'test_fail'}
    assert testcase_names == expected_tests, f"Test cases mismatch. Expected {expected_tests}, got {testcase_names}"

    # Check pass/fail status
    # Check pass/fail status: a failing test has a <failure> child element.
    # (ElementTree elements have no getparent(); that is an lxml-only method,
    # so inspect each <testcase> directly instead.)
    failed_tests = {tc.attrib['name'] for tc in testcases if tc.find('failure') is not None}
    assert 'test_fail' in failed_tests, "Failed test 'test_fail' not found in report"
    assert 'test_pass' not in failed_tests, "Passed test 'test_pass' incorrectly marked as failed"

    # Check summary counts
    tests = int(root.attrib.get('tests', 0))
    failures_count = int(root.attrib.get('failures', 0))
    assert tests == 2, f"Expected 2 tests, got {tests}"
    assert failures_count == 1, f"Expected 1 failure, got {failures_count}"


if __name__ == '__main__':
    report_file = 'test-reports/sample-tests.xml'
    os.makedirs(os.path.dirname(report_file), exist_ok=True)

    run_tests_and_generate_report(report_file)

    if wait_for_file(report_file):
        validate_report(report_file)
        print('Test report validation passed.')
    else:
        raise FileNotFoundError(f'Test report {report_file} was not generated in time.')

This script defines two simple tests: one that passes and one that fails.

The run_tests_and_generate_report function runs these tests and generates an XML report using xmlrunner.

The wait_for_file function waits up to 10 seconds for the report file to appear, simulating waiting for pipeline artifact generation.
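In a real pipeline an artifact file can exist before it is fully written, so a bare existence check can read a truncated report. A stricter variant (a sketch, not part of the original script) polls until the file actually parses as XML:

```python
import os
import time
import xml.etree.ElementTree as ET


def wait_for_valid_report(file_path, timeout=10, interval=0.5):
    """Poll until file_path exists and parses as XML, or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(file_path):
            try:
                ET.parse(file_path)  # raises ParseError on a partial write
                return True
            except ET.ParseError:
                pass  # file is still being written; keep polling
        time.sleep(interval)
    return False
```

Swapping this in for wait_for_file guards against reading a half-written artifact as well as a missing one.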

The validate_report function parses the XML report and checks:

  • The test suite name matches the test class name.
  • All expected test cases are listed.
  • The pass/fail status is correct for each test.
  • The summary counts for total tests and failures are accurate.

This approach ensures the test report generated in a pipeline is complete and accurate.

Common Mistakes - 4 Pitfalls
Not waiting for the test report file before reading it
Hardcoding file paths without creating directories
Parsing test report as plain text instead of XML
Not verifying summary statistics in the report
Bonus Challenge

Now add data-driven testing with 3 different inputs and verify the test report reflects all runs.
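One possible starting point (an assumption on my part, not the official solution) is to generate one test method per input, so that each input appears as its own testcase entry in the XML report:

```python
import unittest

# Hypothetical inputs for the data-driven challenge
INPUTS = [1, 2, 3]


class DataDrivenTests(unittest.TestCase):
    """Empty shell; test methods are attached below, one per input."""
    pass


def _make_test(value):
    # Closure captures the input value for one generated test method
    def test(self):
        self.assertGreater(value, 0)
    return test


for value in INPUTS:
    setattr(DataDrivenTests, f'test_input_{value}', _make_test(value))

# Run the suite and confirm all three generated tests executed
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DataDrivenTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 3
```

Generated methods (rather than a loop inside a single test) matter here because each one becomes a separate testcase element in the report, which is what the challenge asks you to verify.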
