What if your test failures could tell you which ones really need your attention?
Why use @pytest.mark.xfail for expected failures? Purpose and use cases
Imagine a test that covers a feature still under development, or a known bug that is not fixed yet.
Every test run shows this failure, mixed in with failures you did not expect.
Manually tracking which failures are expected wastes time and causes confusion.
You might spend hours investigating failures that you already know about.
This slows down your work and increases frustration.
The @pytest.mark.xfail tag tells pytest that a test is expected to fail.
Pytest then marks these failures differently, so you can focus on new, unexpected problems.
This saves time and keeps your test reports clear.
Without the mark, a known failure looks just like a new regression:

```python
def test_feature():
    assert feature() == expected  # fails every run, but nothing marks it as known
```

With the mark, pytest reports the same failure as XFAIL instead of FAILED:

```python
import pytest

@pytest.mark.xfail
def test_feature():
    assert feature() == expected
```

You can easily separate known issues from new bugs, making your testing smarter and faster.
When a new feature is half-built, you mark its test with @pytest.mark.xfail so your test suite stays green while you finish the work.
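As a sketch of that workflow, the example below marks a test for a hypothetical, unfinished `parse_config` function (both names are illustrative, not from the original). The optional `reason` argument documents why the failure is expected, and `strict=True` makes pytest flag the test if it unexpectedly starts passing, so you remember to remove the mark once the feature is done:

```python
import pytest

# Hypothetical half-built feature: a stub that is not implemented yet.
def parse_config(text):
    raise NotImplementedError("still under development")

# reason= records why the failure is expected in the test report;
# strict=True turns an unexpected pass (XPASS) into a failure,
# reminding you to drop the marker when the feature is finished.
@pytest.mark.xfail(reason="parse_config not implemented yet", strict=True)
def test_parse_config():
    assert parse_config("key=value") == {"key": "value"}
```

Running the suite then shows this test as XFAIL rather than FAILED, keeping the overall run green while the work is in progress.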
Manually tracking expected failures wastes time and causes confusion.
@pytest.mark.xfail clearly marks tests expected to fail.
This helps focus on real new problems and speeds up testing.