In risk-based testing, which factor is most important when deciding which features to test first?
Think about what risk means in testing: what could go wrong and how bad it would be.
Risk-based testing prioritizes the features that are most likely to fail and whose failure would cause the most damage. This makes the best use of limited testing time.
What is the output of this Python code that calculates risk scores for features?
features = {'Login': {'likelihood': 0.9, 'impact': 0.8}, 'Search': {'likelihood': 0.4, 'impact': 0.5}}
risk_scores = {f: data['likelihood'] * data['impact'] for f, data in features.items()}
print(risk_scores)
Multiply likelihood by impact for each feature.
Risk score = likelihood * impact. For Login: 0.9 * 0.8 = 0.72. For Search: 0.4 * 0.5 = 0.2.
Given a list of features with risk scores, which assertion correctly checks that the features are sorted from highest to lowest risk?
features = [('Login', 0.72), ('Search', 0.2), ('Profile', 0.5)]
sorted_features = sorted(features, key=lambda x: x[1], reverse=True)
Check that the list is sorted descending by risk score.
The sorted list should start with the highest risk score (0.72 for Login), then 0.5 for Profile, then 0.2 for Search.
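A minimal sketch of one assertion that verifies descending order, using the feature list from the question (the pairwise comparison shown here is one of several reasonable ways to write the check):

```python
features = [('Login', 0.72), ('Search', 0.2), ('Profile', 0.5)]
sorted_features = sorted(features, key=lambda x: x[1], reverse=True)

# Every score must be >= the score that follows it.
scores = [score for _, score in sorted_features]
assert all(scores[i] >= scores[i + 1] for i in range(len(scores) - 1))

print(sorted_features)  # [('Login', 0.72), ('Profile', 0.5), ('Search', 0.2)]
```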
Why does this code raise a TypeError when calculating risk scores?
features = {'Login': {'likelihood': '0.9', 'impact': 0.8}}
risk_scores = {f: data['likelihood'] * data['impact'] for f, data in features.items()}
print(risk_scores)
Check the data types of values used in multiplication.
The likelihood is the string '0.9', not a number. Python supports multiplying a string by an int (which repeats the string), but multiplying a string by a float is unsupported, so '0.9' * 0.8 raises a TypeError.
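One possible fix, assuming the string is meant to hold a number: coerce the values to float before multiplying.

```python
features = {'Login': {'likelihood': '0.9', 'impact': 0.8}}

# float() converts the string '0.9' to the number 0.9 before multiplying,
# avoiding the TypeError from str * float.
risk_scores = {
    f: float(data['likelihood']) * float(data['impact'])
    for f, data in features.items()
}
print(risk_scores)  # Login's score is 0.9 * 0.8 = 0.72 (up to float rounding)
```

In real code, validating the input data (e.g. rejecting non-numeric likelihoods at load time) is usually safer than coercing silently.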
You have 5 features with risk scores: A(0.9), B(0.7), C(0.4), D(0.3), E(0.1). You can only write tests for 3 features due to time limits. Which selection best follows risk-based testing principles?
Focus on features with the highest risk scores first.
Risk-based testing prioritizes testing features with the highest risk to catch critical issues early.
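The selection above can be sketched in a few lines: sort the features by risk score and take as many as the time budget allows. The dict of scores and the budget of 3 come from the question.

```python
features = {'A': 0.9, 'B': 0.7, 'C': 0.4, 'D': 0.3, 'E': 0.1}
budget = 3  # number of features there is time to test

# Sort feature names by their risk score, highest first, and keep the top 3.
to_test = sorted(features, key=features.get, reverse=True)[:budget]
print(to_test)  # ['A', 'B', 'C']
```

A, B, and C carry the three highest risk scores, so testing them first follows the risk-based principle.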