Code coverage helps us understand how much of our code is tested. Why is this important?
Think about what coverage tells you about your tests and code.
Code coverage reports which lines (and optionally branches) of the code execute while the tests run. This identifies untested code so you can add tests to cover those parts. It does not guarantee correctness, bug fixes, or faster tests.
Given this simple Python function and tests, what coverage percentage will pytest-cov report?
def add(a, b):
    if a > 0:
        return a + b
    else:
        return b

def test_add_positive():
    assert add(2, 3) == 5

# Note: no test for add with a <= 0
Think about which lines run when add(2, 3) is called.
The test calls add with a=2, so the 'if a > 0' branch runs, but the 'else' branch does not. Of the four statements coverage.py counts in the function (the def line, the if, and the two returns), three execute, so pytest-cov reports 75% line coverage.
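The arithmetic behind that figure can be sketched directly (the statement counts below are assumptions based on how coverage.py counts executable lines):

```python
# Statements coverage.py counts in the module under test:
#   1. def add(a, b):   - executed when the module is imported
#   2. if a > 0:        - executed by the call add(2, 3)
#   3. return a + b     - executed (a=2 takes the True branch)
#   4. return b         - NOT executed (no test with a <= 0)
executed, total = 3, 4
coverage_percent = 100 * executed / total
print(coverage_percent)  # 75.0
```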
You have coverage data showing 85% line coverage. Which assertion best verifies your tests cover at least 80% of lines?
Think about verifying minimum coverage threshold.
To check tests cover at least 80%, assert coverage_percent >= 80. Other options check wrong thresholds or exact values.
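As a sketch, the comparison looks like this (coverage_percent is a stand-in for the value your coverage report gives you):

```python
coverage_percent = 85.0  # the 85% line coverage from the question

# Enforces a *minimum* of 80% line coverage:
assert coverage_percent >= 80  # passes, since 85 >= 80

# Common wrong checks:
#   assert coverage_percent == 80   - fails unless coverage is exactly 80
#   assert coverage_percent > 85    - tests the wrong threshold
```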
Given this code and test, why does coverage show 0% for the function?
def multiply(x, y):
    return x * y

def test_multiply():
    result = multiply(3, 4)
    assert result == 12

# Coverage run command: pytest --cov=module_name
# But coverage shows 0% for multiply function
Check if coverage is pointed to the right code file.
If pytest-cov is pointed at the wrong target (for example, --cov=module_name when the code actually lives in a differently named module), it never measures the file containing multiply, so it reports 0% coverage for the function even though the tests themselves run and pass.
Which pytest-cov option configuration will cause tests to fail if coverage is under 90%?
Look for the official pytest-cov option to fail on low coverage.
The correct option is --cov-fail-under=90. Other options are invalid and cause errors.
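The same threshold can also be set project-wide in pytest configuration rather than on the command line; a minimal sketch, assuming a pytest.ini at the project root and a package named mymodule (both placeholders):

```ini
# pytest.ini (assumed project root; mymodule is a placeholder package name)
[pytest]
addopts = --cov=mymodule --cov-fail-under=90
```

With this in place, any pytest run fails when total coverage drops below 90%, without anyone having to remember the flag.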