Which of the following best describes a flaky test in software testing?
Think about tests that behave unpredictably without changes in the code.
A flaky test is one that produces different results (pass or fail) when run multiple times without any changes in the code or environment. This unpredictability makes it hard to trust test results.
Consider the following Python test simulation code that randomly fails. What is the output after running it once?
import random

def flaky_test():
    if random.choice([True, False]):
        return 'Test Passed'
    else:
        return 'Test Failed'

result = flaky_test()
print(result)
The output depends on a random choice; consider which outputs are possible.
Because random.choice([True, False]) selects each value with equal probability, the code returns either 'Test Passed' or 'Test Failed'. Since the question asks for the output of a single run, both B and D are possible outputs.
Which assertion approach best helps detect flaky tests by running the same test multiple times and checking for consistent results?
Think about how to confirm a test is stable across runs.
To detect flaky tests, run the same test multiple times and assert that all results are consistent (all pass or all fail); inconsistent results across runs indicate instability. Option A enforces this consistency check, so it confirms the test is not flaky.
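The approach above can be sketched as a small harness. This is a minimal illustration, not one of the question's answer options; the helper names run_repeatedly and assert_consistent are hypothetical:

```python
def run_repeatedly(test_fn, runs=10):
    """Run test_fn several times and collect its pass/fail results."""
    return [test_fn() for _ in range(runs)]

def assert_consistent(results):
    """Assert every run produced the same result (all pass or all fail)."""
    assert len(set(results)) == 1, f"Inconsistent results: {results}"

# A stable test returns the same result every time, so the check passes.
stable_results = run_repeatedly(lambda: True, runs=10)
assert_consistent(stable_results)
```

If the list mixes passes and failures, assert_consistent raises, flagging the test as flaky.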
Given this test code snippet, which change will most likely fix the flaky behavior caused by timing issues?
def test_load_data():
data = load_data()
assert len(data) > 0
    # load_data fetches data asynchronously and may not be ready immediately
Think about waiting for the data to be ready instead of guessing a fixed delay.
Using a retry loop that waits until the data is ready before asserting avoids flaky failures caused by timing. A fixed delay (Option B) is unreliable and lengthens test runs. Removing the assertion or running the test only once does not address the flakiness.
In a continuous integration (CI) pipeline, what is the best practice to handle flaky tests to maintain reliable test reports?
Consider how to balance test reliability and visibility in automated pipelines.
Automatically retrying flaky tests a few times reduces false failures in CI reports while keeping the tests visible. Ignoring or removing flaky tests hides real problems and reduces test coverage. Running tests only on developer machines undermines CI reliability.
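CI retry behavior is usually provided by the test runner (for example, pytest's rerun-failures plugin exposes a reruns option), but the core idea can be sketched as a wrapper. run_with_retries and the sample test below are hypothetical names for illustration:

```python
def run_with_retries(test_fn, max_attempts=3):
    """Re-run a failing test up to max_attempts times; pass if any attempt passes."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            test_fn()
            return attempt        # number of attempts it took to pass
        except AssertionError as err:
            last_error = err      # remember the failure and retry
    raise last_error              # still failing after all retries

# A transient failure: the test fails on its first attempt, then passes.
attempts = {"n": 0}
def sometimes_failing_test():
    attempts["n"] += 1
    assert attempts["n"] > 1, "transient failure on first run"

assert run_with_retries(sometimes_failing_test) == 2
```

Note that retries mask genuinely flaky tests rather than fix them, so CI reports should still record how many attempts each test needed.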