Testing Fundamentals (~15 mins)

Test execution reporting in Testing Fundamentals - Build an Automation Script

Verify test execution report generation after running test cases
Preconditions (2)
Step 1: Run the automated test suite using the test runner command
Step 2: Wait for the test execution to complete
Step 3: Locate the generated test execution report file
Step 4: Open the report file
Step 5: Verify the report contains the test case names
Step 6: Verify the report shows the pass/fail status for each test case
Step 7: Verify the report includes the total number of tests run
Step 8: Verify the report includes the number of passed and failed tests
✅ Expected Result: A test execution report file is generated that lists all test cases run, their pass/fail status, and summary statistics of total, passed, and failed tests.
Automation Requirements - pytest
Assertions Needed:
Report file exists after test run
Report file contains test case names
Report file contains pass/fail status for each test
Report file contains summary of total, passed, and failed tests
Best Practices:
Use pytest built-in reporting options
Use assertions to verify report content
Use setup and teardown hooks if needed
Keep test code clean and readable
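For example, pytest's built-in `--junitxml=report.xml` option writes a machine-readable report that can be checked programmatically. A minimal sketch of verifying such a report (the XML below is a hand-written sample for illustration, not pytest's exact output):

```python
import xml.etree.ElementTree as ET

# Hand-written JUnit-style sample, mimicking `pytest --junitxml=report.xml` output
SAMPLE = """<testsuites>
  <testsuite name="pytest" tests="2" failures="1" errors="0">
    <testcase classname="test_demo" name="test_example_1"/>
    <testcase classname="test_demo" name="test_example_2">
      <failure message="assert 1 + 1 == 3"/>
    </testcase>
  </testsuite>
</testsuites>"""

def summarize(xml_text):
    """Return (total, failed, test names) parsed from a JUnit-style XML report."""
    suite = ET.fromstring(xml_text).find("testsuite")
    names = [case.get("name") for case in suite.findall("testcase")]
    return int(suite.get("tests")), int(suite.get("failures")), names

total, failed, names = summarize(SAMPLE)
assert total == 2 and failed == 1
assert "test_example_1" in names and "test_example_2" in names
```

Parsing a structured report like this is more robust than string-matching a plain-text file.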
Automated Solution
import os
import pytest

REPORT_PATH = 'test_report.txt'

@pytest.fixture(scope='session', autouse=True)
def generate_report():
    # Write the report during setup, before any tests run, so the
    # verification tests below can read it. This simulates a report
    # left behind by a previous test run. (Writing it after `yield`
    # would be too late: session teardown runs after all tests.)
    with open(REPORT_PATH, 'w') as f:
        f.write('Test Case: test_example_1 - PASS\n')
        f.write('Test Case: test_example_2 - FAIL\n')
        f.write('Total tests: 2\n')
        f.write('Passed: 1\n')
        f.write('Failed: 1\n')
    yield
    # Teardown could clean up the report file here if desired


def test_example_1():
    assert 1 + 1 == 2

def test_example_2():
    # Intentionally fails to produce a FAIL entry in the report
    assert 1 + 1 == 3


def test_report_file_exists():
    assert os.path.exists(REPORT_PATH), f'Report file {REPORT_PATH} should exist'


def test_report_contains_test_names():
    with open(REPORT_PATH) as f:
        content = f.read()
    assert 'test_example_1' in content
    assert 'test_example_2' in content


def test_report_contains_pass_fail_status():
    with open(REPORT_PATH) as f:
        content = f.read()
    assert 'PASS' in content
    assert 'FAIL' in content


def test_report_contains_summary():
    with open(REPORT_PATH) as f:
        content = f.read()
    assert 'Total tests: 2' in content
    assert 'Passed: 1' in content
    assert 'Failed: 1' in content

This code uses pytest to run two simple test cases: one that passes and one that fails.

The generate_report fixture runs once per test session. During its setup phase it writes a simple text report file test_report.txt with test names, pass/fail status, and summary counts, simulating a report left by an earlier run so that the verification tests can read it.

Separate test functions then check that the report file exists and contains the expected information. This simulates verifying a test execution report.

Using fixtures and assertions keeps the code organized and clear. The report generation is simplified here for demonstration but shows the key idea of verifying test execution reporting.

Common Mistakes - 4 Pitfalls
Not waiting for tests to finish before checking the report
Hardcoding report file paths without flexibility
Not asserting the presence of all required report details
Generating report manually inside test cases
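To avoid the hardcoded-path pitfall, the report location can be made configurable, for example via an environment variable (the variable name `TEST_REPORT_PATH` is illustrative) as a drop-in replacement for the constant in the solution above:

```python
import os

# Read the report path from the environment, falling back to a
# local default when the variable is unset.
REPORT_PATH = os.environ.get('TEST_REPORT_PATH', 'test_report.txt')
```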
Bonus Challenge

Now add data-driven testing with 3 different inputs and verify the report includes all test runs
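One way to start (a sketch, not the full solution): pytest's `parametrize` marker runs a test once per input, and each parametrized case appears as a separate entry (e.g. `test_addition[1-2-3]`) in the report.

```python
import pytest

# Three data-driven inputs; each generates its own test entry in the report
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (2, 3, 5),
    (10, 5, 15),
])
def test_addition(a, b, expected):
    assert a + b == expected
```

The remaining work is verifying that all three parametrized entries show up in the generated report.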
