Testing Fundamentals (~15 mins)

Testing in the software development lifecycle (Testing Fundamentals - Build an Automation Script)

Verify testing activities in each phase of the software development lifecycle
Preconditions (2)
Step 1: Review the requirements document and identify testable requirements
Step 2: Check that test cases are created based on requirements
Step 3: Verify design documents include testability considerations
Step 4: Confirm unit tests are written and executed during implementation
Step 5: Execute system and integration tests after implementation
Step 6: Validate that defects found during testing are logged and tracked
Step 7: Ensure acceptance testing is performed before deployment
Step 8: Confirm that testing results are documented and reviewed
✅ Expected Result: Testing activities are properly planned and executed in each phase of the software development lifecycle, with test cases linked to requirements, tests executed at appropriate stages, defects tracked, and results documented.
Automation Requirements - Python unittest
Assertions Needed:
Test cases exist for each requirement
Unit tests pass during implementation
Integration and system tests pass
Defects are logged when tests fail
Acceptance tests pass before deployment
Best Practices:
Use clear and descriptive test method names
Organize tests by lifecycle phase
Use setup and teardown methods for test environment preparation
Log test results and failures clearly
Keep tests independent and repeatable
Automated Solution
import unittest

class TestSDLCPhases(unittest.TestCase):
    def setUp(self):
        # Setup test environment, e.g., load requirements and test cases
        self.requirements = ['Login', 'Search', 'Checkout']
        self.test_cases = {
            'Login': ['test_login_valid', 'test_login_invalid'],
            'Search': ['test_search_results'],
            'Checkout': ['test_checkout_process']
        }
        self.unit_tests_passed = True
        self.integration_tests_passed = True
        self.system_tests_passed = True
        self.defects_logged = []
        self.acceptance_tests_passed = True

    def test_test_cases_exist_for_requirements(self):
        for req in self.requirements:
            self.assertIn(req, self.test_cases, f"No test cases for requirement: {req}")
            self.assertGreater(len(self.test_cases[req]), 0, f"No test cases listed for {req}")

    def test_unit_tests_pass(self):
        self.assertTrue(self.unit_tests_passed, "Unit tests did not pass")

    def test_integration_and_system_tests_pass(self):
        self.assertTrue(self.integration_tests_passed, "Integration tests did not pass")
        self.assertTrue(self.system_tests_passed, "System tests did not pass")

    def test_defects_logged_on_failures(self):
        # Simulate a failing unit test and verify that a defect is logged for it
        unit_tests_passed = False
        if not unit_tests_passed:
            self.defects_logged.append('Unit test failure defect')
        self.assertGreater(len(self.defects_logged), 0, "Defects should be logged when tests fail")

    def test_acceptance_tests_pass_before_deployment(self):
        self.assertTrue(self.acceptance_tests_passed, "Acceptance tests did not pass before deployment")

    def tearDown(self):
        # Clean up test environment
        pass

if __name__ == '__main__':
    unittest.main()

This test class simulates verifying testing activities in the software development lifecycle.

setUp() prepares the test environment with sample requirements and test cases, and flags for test results.

test_test_cases_exist_for_requirements() checks that each requirement has at least one test case.

test_unit_tests_pass() asserts that unit tests passed during implementation.

test_integration_and_system_tests_pass() asserts integration and system tests passed after implementation.

test_defects_logged_on_failures() simulates a failing unit test, logs a defect for it, and asserts that at least one defect was recorded.

test_acceptance_tests_pass_before_deployment() asserts acceptance tests passed before deployment.

Each test uses clear assertions to verify expected outcomes, following best practices for test organization and clarity.
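Beyond `python -m unittest`, a suite like the one above can also be run programmatically, which is useful for logging results clearly (one of the best practices listed earlier). The sketch below uses a minimal, hypothetical test case purely to demonstrate `TestLoader` and `TextTestRunner`; the class and data are illustrative, not part of the lesson's solution.

```python
import io
import unittest

# Hypothetical minimal test case, used only to demonstrate the runner.
class TestSample(unittest.TestCase):
    def test_requirement_has_cases(self):
        test_cases = {'Login': ['test_login_valid']}
        self.assertGreater(len(test_cases['Login']), 0)

# Load the suite and run it, capturing the textual report in a buffer.
loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(TestSample)
stream = io.StringIO()
result = unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)

print(result.testsRun)        # number of tests executed
print(result.wasSuccessful()) # True when there are no failures or errors
```

The returned `TestResult` object exposes `failures` and `errors` lists, so a CI script can log each failure explicitly instead of relying on console output.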

Common Mistakes - 5 Pitfalls
Not linking test cases to specific requirements - without the link, it's unclear whether all requirements are tested, risking missed defects; create and verify test cases explicitly mapped to each requirement
Skipping unit tests during implementation
Not logging defects when tests fail
Running acceptance tests after deployment
Writing dependent or non-repeatable tests
Bonus Challenge

Now add data-driven testing with 3 different sets of requirements and test cases
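One possible approach to the challenge (a sketch, not the official solution) is to reuse the existing requirement check across three hypothetical datasets with `unittest`'s `subTest()`, so each dataset is reported separately when it fails. The dataset contents below are assumptions for illustration.

```python
import io
import unittest

class TestSDLCDataDriven(unittest.TestCase):
    # Three hypothetical requirement/test-case datasets (illustrative only).
    DATASETS = [
        {'requirements': ['Login'],
         'test_cases': {'Login': ['test_login_valid', 'test_login_invalid']}},
        {'requirements': ['Search'],
         'test_cases': {'Search': ['test_search_results']}},
        {'requirements': ['Checkout'],
         'test_cases': {'Checkout': ['test_checkout_process']}},
    ]

    def test_test_cases_exist_for_requirements(self):
        for i, data in enumerate(self.DATASETS):
            with self.subTest(dataset=i):  # each dataset reported independently
                for req in data['requirements']:
                    self.assertIn(req, data['test_cases'],
                                  f"No test cases for requirement: {req}")
                    self.assertGreater(len(data['test_cases'][req]), 0,
                                       f"No test cases listed for {req}")

# Run the data-driven suite and summarize the outcome.
suite = unittest.TestLoader().loadTestsFromTestCase(TestSDLCDataDriven)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.testsRun, result.wasSuccessful())
```

Note that `subTest()` keeps this as a single test method, so a failure in one dataset does not stop the remaining datasets from being checked.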
