Testing Fundamentals · testing · ~15 mins

Why CI/CD integrates testing into delivery (Testing Fundamentals: Automation Benefits in Action)

Verify automated tests run during CI/CD pipeline
Preconditions (3)
Step 1: Push a code change to the repository branch that triggers the CI/CD pipeline
Step 2: Observe the CI/CD pipeline execution start
Step 3: Verify that the automated tests are executed as part of the pipeline
Step 4: Check the test results in the pipeline logs or reports
✅ Expected Result: The CI/CD pipeline runs the automated tests automatically after code changes, and the test results are visible in the pipeline report indicating pass or fail status.
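The trigger described in Steps 1-3 could be wired up with a CI configuration along these lines. This is a minimal sketch assuming GitHub Actions; the workflow file path, runner image, and Python version are illustrative assumptions, not part of the lesson:

```yaml
# .github/workflows/tests.yml (hypothetical path)
name: tests
on: [push, pull_request]  # Step 1: run on every code change
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest pytest-cov
      # Steps 3-4: execute the tests; pass/fail status and the
      # coverage report appear in the pipeline log
      - run: pytest -v --cov=. --cov-report=term-missing
```

Any CI system (GitLab CI, Jenkins, CircleCI) follows the same shape: check out the code, install dependencies, run pytest, and fail the pipeline if any test fails.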
Automation Requirements - pytest with pytest-cov for coverage
Assertions Needed:
Verify test functions run without errors
Assert expected outputs or states in test cases
Confirm test coverage reports are generated
Best Practices:
Use fixtures for setup and teardown
Write small, independent test functions
Use descriptive test names
Integrate tests into CI/CD pipeline configuration
Use explicit assertions for clarity
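The best practices above can be sketched in a single test module. The fixture name and test data here are illustrative, not from the lesson:

```python
import pytest

# Function under test
def add(a, b):
    return a + b

@pytest.fixture
def sample_pairs():
    # Setup: provide (a, b, expected) tuples for the tests.
    # Teardown code, if needed, would follow a `yield` here.
    return [(2, 3, 5), (-1, -1, -2), (0, 0, 0)]

# Small, independent test with a descriptive name and explicit assertions
def test_add_returns_expected_sum(sample_pairs):
    for a, b, expected in sample_pairs:
        assert add(a, b) == expected
```

Because the fixture handles setup, each test stays short and focused on one behavior, and a failure message points directly at the broken case.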
Automated Solution
Testing Fundamentals
import pytest

# Sample function to test
def add(a, b):
    return a + b

# Test function for add
def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -1) == -2

def test_add_zero():
    assert add(0, 0) == 0

if __name__ == '__main__':
    # Run the tests directly with verbose output and coverage
    # (requires the pytest-cov plugin to be installed)
    pytest.main(['-v', '--cov=.', '--cov-report=term-missing'])

This script defines a simple function add and three test functions to verify its behavior with positive numbers, negative numbers, and zero.

We use pytest as the test framework because its tests are easy to write and read. The assert statements check that the function returns the expected results.

The pytest.main() call runs the tests with verbose output and coverage reporting, which helps verify that tests are executed and code coverage is measured.

In a CI/CD pipeline, this script would be triggered automatically after code changes, ensuring tests run every time code is delivered.

Common Mistakes - 4 Pitfalls
Not using explicit assertions in test functions
Writing large test functions that test multiple things
Not integrating tests into the CI/CD pipeline configuration: tests won't run automatically on code changes, defeating the purpose of CI/CD testing. Configure the pipeline to run tests on every code push or pull request.
Hardcoding test data inside tests without flexibility
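The first two pitfalls can be contrasted in code. A sketch (the "good" test's input values are illustrative):

```python
def add(a, b):
    return a + b

# Anti-pattern: one large test covering several cases at once.
# If it fails, the report won't say which case broke.
def test_add_everything():
    results = [add(2, 3), add(-1, -1), add(0, 0)]
    assert results == [5, -2, 0]

# Better: a small, independent test with an explicit assertion,
# so a failure pinpoints exactly one behavior.
def test_add_mixed_signs():
    assert add(-4, 9) == 5
```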
Bonus Challenge

Now add data-driven testing with 3 different input pairs for the add function
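One way to approach data-driven testing is pytest's built-in `pytest.mark.parametrize`; the three input pairs below are example values, not a prescribed answer:

```python
import pytest

def add(a, b):
    return a + b

# Each tuple becomes its own test case in the pytest report,
# so a failing pair is identified individually.
@pytest.mark.parametrize("a, b, expected", [
    (10, 5, 15),
    (-2, 7, 5),
    (100, -100, 0),
])
def test_add_data_driven(a, b, expected):
    assert add(a, b) == expected
```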
