Testing Fundamentals - testing - ~15 mins

Performance test reporting in Testing Fundamentals - Build an Automation Script

Generate and verify performance test report after load test
Preconditions (3)
Step 1: Run the load test script with 100 virtual users for 5 minutes
Step 2: Wait for the test to complete
Step 3: Generate the performance test report in HTML format
Step 4: Open the generated report
Step 5: Verify the report contains average response time, throughput, error rate, and test duration
Step 6: Check that the report summary shows no errors above 1%
✅ Expected Result: Performance test report is generated successfully and contains all key metrics with error rate below 1%
Automation Requirements - Python with Locust and pytest
Assertions Needed:
Report file exists after test run
Report contains average response time metric
Report contains throughput metric
Report contains error rate metric
Error rate is below 1%
Best Practices:
Use explicit waits for test completion
Use file existence checks for report generation
Parse report content to verify metrics
Use pytest assertions for validation
Keep test modular and readable
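In a real (non-simulated) run, the load test from the steps above could be driven by Locust's headless mode, which can write the HTML report directly. A minimal sketch of building that command; the locustfile name and host are assumptions, not part of the exercise:

```python
import subprocess

def build_locust_command(users=100, run_time='5m', report='performance_report.html'):
    # --headless, -u, --run-time, and --html are standard Locust CLI flags;
    # locustfile.py and the host URL below are illustrative placeholders.
    return [
        'locust', '-f', 'locustfile.py', '--headless',
        '-u', str(users), '--run-time', run_time,
        '--html', report, '--host', 'https://example.com',
    ]

# In a real run: subprocess.run(build_locust_command(), check=True)
```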
Automated Solution
import os
import time
import pytest

REPORT_PATH = 'performance_report.html'

# Simulate running load test and generating report
# In real scenario, this would trigger Locust or other tool

def run_load_test():
    # Simulate test duration
    time.sleep(5)  # wait 5 seconds to simulate test
    # Simulate report generation
    with open(REPORT_PATH, 'w') as f:
        f.write('<html><body>')
        f.write('<h1>Performance Test Report</h1>')
        f.write('<p>Average Response Time: 200ms</p>')
        f.write('<p>Throughput: 50 requests/sec</p>')
        f.write('<p>Error Rate: 0.5%</p>')
        f.write('<p>Test Duration: 5 minutes</p>')
        f.write('</body></html>')


def test_performance_report_generation():
    # Run the load test simulation
    run_load_test()

    # Wait for report file to exist (explicit wait)
    timeout = 10
    poll_interval = 1
    waited = 0
    while not os.path.exists(REPORT_PATH) and waited < timeout:
        time.sleep(poll_interval)
        waited += poll_interval

    # Assert report file exists
    assert os.path.exists(REPORT_PATH), 'Report file was not generated'

    # Read report content
    with open(REPORT_PATH, 'r') as f:
        content = f.read()

    # Assert key metrics present
    assert 'Average Response Time' in content, 'Average Response Time metric missing'
    assert 'Throughput' in content, 'Throughput metric missing'
    assert 'Error Rate' in content, 'Error Rate metric missing'
    assert 'Test Duration' in content, 'Test Duration metric missing'

    # Extract error rate value
    import re
    match = re.search(r'Error Rate: ([0-9.]+)%', content)
    assert match is not None, 'Error Rate value not found'
    error_rate = float(match.group(1))

    # Assert error rate below 1%
    assert error_rate < 1.0, f'Error rate too high: {error_rate}%'

    # Cleanup
    os.remove(REPORT_PATH)

This test script simulates running a performance load test and generating an HTML report.

The run_load_test() function waits 5 seconds to mimic test duration and creates a simple HTML report file with key metrics.

The test function test_performance_report_generation() waits explicitly for the report file to appear, then reads its content.
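The polling loop can be factored into a reusable helper, a sketch rather than part of the original script:

```python
import os
import time

def wait_for_file(path, timeout=10, poll_interval=1):
    """Poll until `path` exists or `timeout` seconds elapse; return True if found."""
    waited = 0
    while waited < timeout:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval)
        waited += poll_interval
    return os.path.exists(path)
```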

It asserts the presence of average response time, throughput, error rate, and test duration in the report.

Using a regular expression, it extracts the error rate value and asserts it is below 1%.
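The extraction step in isolation, with sample content mirroring the simulated report:

```python
import re

content = '<p>Error Rate: 0.5%</p>'
# Capture the numeric value before the percent sign.
match = re.search(r'Error Rate: ([0-9.]+)%', content)
error_rate = float(match.group(1))
print(error_rate)  # 0.5
```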

Finally, it cleans up by deleting the report file.
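Note that the inline os.remove() is skipped if an earlier assertion fails, leaving a stale report behind. A try/finally wrapper (a sketch, not the original code) guarantees cleanup either way:

```python
import os

def read_report_and_cleanup(path):
    # Read the report, then remove the file even if the read raises.
    try:
        with open(path) as f:
            return f.read()
    finally:
        if os.path.exists(path):
            os.remove(path)
```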

This approach uses explicit waits, file checks, content parsing, and pytest assertions to verify performance test reporting.

Common Mistakes - 3 Pitfalls
Not waiting for the report file to be generated before reading it: the test may fail because the file does not exist yet, causing false negatives. Use explicit waits or polling to check for the report file's existence before reading.
Hardcoding metric values in assertions without parsing the report: assertions that never read the real output prove nothing about the generated report. Extract values from the report content and assert on those.
Ignoring cleanup of generated report files: stale reports can make later runs pass spuriously or clutter the workspace. Delete generated files after each test, ideally in a fixture or finally block.
Bonus Challenge

Now add data-driven testing with 3 different load levels: 50, 100, and 200 virtual users
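One way to approach the challenge is pytest's parametrize decorator. This sketch reuses the simulated-report idea from the solution; the function and file names are illustrative:

```python
import os
import pytest

def run_load_test(virtual_users):
    # Simulated stand-in for a real load test at the given user level.
    path = f'performance_report_{virtual_users}.html'
    with open(path, 'w') as f:
        f.write(f'<p>Error Rate: 0.5%</p><p>Virtual Users: {virtual_users}</p>')
    return path

@pytest.mark.parametrize('virtual_users', [50, 100, 200])
def test_report_per_load_level(virtual_users):
    path = run_load_test(virtual_users)
    try:
        with open(path) as f:
            content = f.read()
        assert f'Virtual Users: {virtual_users}' in content
        assert 'Error Rate' in content
    finally:
        os.remove(path)
```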
