Imagine a software module has 50 defects found during testing. The module size is 10,000 lines of code. What is the defect density?
Defect density = Number of defects / Size of code.
Applying the formula: 50 defects / 10,000 lines = 0.005 defects per line (equivalently, 5 defects per KLOC).
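The calculation above can be sketched as a small helper function (a minimal sketch; the function name is illustrative):

```python
def defect_density(defects, lines_of_code):
    """Return defects per line of code."""
    return defects / lines_of_code

# Worked example from above: 50 defects in 10,000 lines.
density = defect_density(50, 10_000)
print(density)  # 0.005
```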
A testing team found 80 defects in the first week and 40 defects in the second week. If the total defects found after two weeks is 120, what is the defect detection rate for the second week?
Defect detection rate = (Defects found in period / Total defects found) * 100%
Second week detection rate = (40 / 120) * 100% = 33.3%
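The same computation in code (a sketch using the figures above; the function name is an assumption):

```python
def detection_rate(defects_in_period, total_defects):
    """Return the defect detection rate for a period, as a percentage."""
    return (defects_in_period / total_defects) * 100

# Second week: 40 of the 120 total defects.
rate = detection_rate(40, 120)
print(f"{rate:.1f}%")  # 33.3%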
What is the output of this Python code?
def calculate_defect_density(defects, lines_of_code):
    return defects / lines_of_code

result = calculate_defect_density(25, 5000)
print(f"Defect Density: {result:.4f}")
Divide defects by lines of code and format to 4 decimals.
25 defects / 5000 lines = 0.005. The `:.4f` format specifier pads to 4 decimal places, so the program prints: Defect Density: 0.0050.
Which assertion correctly verifies that the defect detection rate is 40% when 20 defects are found out of 50 total defects?
defect_found = 20
total_defects = 50
detection_rate = (defect_found / total_defects) * 100
Remember detection_rate is a percentage, not a fraction.
The detection_rate is (20/50)*100 = 40. So the assertion must check for 40, not 0.4.
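A sketch of the correct assertion against that code:

```python
defect_found = 20
total_defects = 50
detection_rate = (defect_found / total_defects) * 100

# Correct: detection_rate holds a percentage (40.0), not a fraction.
assert detection_rate == 40

# This version would fail, because the value was already scaled by 100:
# assert detection_rate == 0.4
```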
What error will this code raise when calculating defect density?
defect_count = '30'
lines_of_code = 10000
defect_density = defect_count / lines_of_code
print(defect_density)
Check the data types of variables used in division.
defect_count is a string ('30'), while lines_of_code is an integer. Python's / operator is not defined between str and int, so the code raises TypeError: unsupported operand type(s) for /: 'str' and 'int'.
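One way to fix it is to convert the string before dividing (a sketch; the explicit int() conversion is the assumed fix):

```python
defect_count = '30'      # a string, e.g. as read from a text report
lines_of_code = 10000

# Convert the string to an integer before dividing to avoid the TypeError.
defect_density = int(defect_count) / lines_of_code
print(defect_density)  # 0.003
```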