Testing Fundamentals · testing · ~15 mins

Test case components (steps, expected, actual) in Testing Fundamentals - Deep Dive

Overview - Test case components (steps, expected, actual)
What is it?
A test case is a detailed instruction set that guides testers on how to check if a software feature works correctly. It includes steps to follow, what result to expect, and the actual result after testing. These components help testers confirm if the software behaves as intended or if there are problems.
Why it matters
Without clear test case components, testers might miss important checks or misunderstand what to look for, leading to bugs slipping into the final product. Good test cases make testing consistent, repeatable, and easier to communicate, which saves time and improves software quality.
Where it fits
Before learning test case components, you should understand basic software testing concepts like what testing is and why it’s done. After this, you can learn how to write full test cases, manage test suites, and automate tests.
Mental Model
Core Idea
A test case is like a recipe that tells you exactly what to do, what to expect, and what actually happened when testing software.
Think of it like...
Imagine baking a cake: the steps are the recipe instructions, the expected result is the perfect cake you want, and the actual result is the cake you baked. If the cake looks and tastes like expected, the recipe worked well.
┌───────────────┐
│ Test Case     │
├───────────────┤
│ 1. Steps      │ → Actions to perform
│ 2. Expected   │ → What should happen
│ 3. Actual     │ → What really happened
└───────────────┘
Build-Up - 6 Steps
1
Foundation: Understanding test case steps
Concept: Test case steps are the clear, ordered actions a tester follows to check a feature.
Steps tell you exactly what to do, like clicking buttons or entering text. Each step should be simple and easy to follow so anyone can repeat the test the same way.
Result
You know exactly how to perform the test without guessing.
Knowing how to write clear steps ensures tests are repeatable and reliable, which is key for finding bugs consistently.
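As a sketch, clear steps read like a numbered checklist where each entry is one concrete, unambiguous action (the login scenario below is hypothetical):

```python
# A hypothetical login test: each step is one concrete action.
steps = [
    "Open the application login page",
    "Enter 'user1' in the username field",
    "Enter 'pass123' in the password field",
    "Click the 'Login' button",
]

# Print the numbered checklist any tester can follow identically.
for number, step in enumerate(steps, start=1):
    print(f"{number}. {step}")
```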
2
Foundation: Defining expected results
Concept: Expected results describe what should happen after each step or at the end of the test.
This could be a message on screen, a change in data, or a new page opening. Expected results set the goal for the test to pass.
Result
You have a clear target to compare against when testing.
Clear expected results prevent confusion about whether the software works or not.
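One way to keep expected results explicit is to write them down before the run, so the tester only compares against them and never invents the target afterwards. A small sketch, with hypothetical checkpoint names:

```python
# Expected results, written down BEFORE the test runs.
expected_results = {
    "after_login": "Dashboard page appears with a welcome message",
    "after_logout": "Login page appears again",
}

# At execution time the tester looks up the target, never decides it on the fly.
checkpoint = "after_login"
target = expected_results[checkpoint]
print(target)
```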
3
Intermediate: Recording actual results
🤔 Before reading on: do you think actual results should be written before or after running the test? Commit to your answer.
Concept: Actual results are what really happens when you perform the test steps.
After running the test, you note what you observe. If it matches the expected result, the test passes; if not, it fails.
Result
You can tell if the software behaves correctly or has bugs.
Capturing actual results accurately is essential to identify problems and communicate them clearly.
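The pass/fail decision is just a comparison between the recorded actual result and the predefined expectation. A minimal sketch (the `verdict` helper is hypothetical):

```python
def verdict(expected: str, actual: str) -> str:
    """Compare the recorded actual result against the expectation."""
    return "PASS" if actual == expected else "FAIL"

expected = "Dashboard page appears"
actual = "Error 500 page appears"   # observed during the run
print(verdict(expected, actual))    # FAIL
```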
4
Intermediate: Linking steps with expected and actual
🤔 Before reading on: do you think expected and actual results should be linked to each step or only at the end? Commit to your answer.
Concept: Each step can have its own expected and actual result to pinpoint exactly where issues occur.
For example, after step 3, you expect a message; if it doesn’t appear, you record that actual result there. This helps find the exact failing point.
Result
You can quickly identify which step caused a problem.
Connecting results to steps improves debugging speed and test clarity.
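Linking an expected and actual result to each step makes the first failing step easy to locate. A sketch with hypothetical step data:

```python
# Each step carries its own expected and, after the run, actual result.
results = [
    ("Enter username", "Field accepts input", "Field accepts input"),
    ("Click 'Login'",  "Spinner appears",     "Spinner appears"),
    ("Wait for load",  "Dashboard appears",   "Error 500 page"),
]

def first_failure(results):
    """Return the 1-based index and name of the first mismatching step, or None."""
    for i, (step, expected, actual) in enumerate(results, start=1):
        if expected != actual:
            return i, step
    return None

print(first_failure(results))  # (3, 'Wait for load')
```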
5
Advanced: Handling ambiguous or dynamic results
🤔 Before reading on: do you think expected results can be flexible or must always be exact? Commit to your answer.
Concept: Sometimes expected results vary, like timestamps or random data, so testers must define acceptable ranges or patterns.
For example, instead of expecting 'Time: 10:00 AM', you expect 'Time displayed in HH:MM format'. This avoids false failures.
Result
Tests become more robust and less flaky.
Understanding how to handle dynamic results prevents wasting time on false alarms.
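For dynamic values like the timestamp example, the expectation can be a pattern rather than a literal string. A sketch using a regular expression for "Time displayed in HH:MM format":

```python
import re

# Instead of a literal value, the expectation is a pattern the output must match.
TIME_PATTERN = re.compile(r"Time: ([01]\d|2[0-3]):[0-5]\d")

def matches_expected(actual: str) -> bool:
    """True when the observed output fits the expected HH:MM pattern."""
    return TIME_PATTERN.fullmatch(actual) is not None

print(matches_expected("Time: 10:00"))   # True  - any valid time passes
print(matches_expected("Time: banana"))  # False - malformed output fails
```

Any valid time passes, so the test no longer fails just because it was run at a different minute.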
6
Expert: Using actual results for continuous improvement
🤔 Before reading on: do you think actual results are only for pass/fail or can they improve tests? Commit to your answer.
Concept: Actual results can reveal patterns of failure or unclear steps, guiding test case refinement.
By analyzing actual results over time, testers improve steps and expected results, making tests clearer and more effective.
Result
Test cases evolve to catch more bugs and reduce confusion.
Leveraging actual results as feedback turns testing into a learning process, increasing software quality.
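Analyzing outcomes across runs is one way to spot steps worth refining. A sketch, with a hypothetical run history and an arbitrary failure-rate threshold:

```python
# Hypothetical run history: step name -> outcomes from repeated runs.
history = {
    "Enter username": ["pass", "pass", "pass", "pass"],
    "Click 'Login'":  ["pass", "fail", "pass", "fail"],
}

def flaky_steps(history, threshold=0.25):
    """Flag steps that sometimes fail: candidates for clearer steps or expectations."""
    flagged = []
    for step, outcomes in history.items():
        fail_rate = outcomes.count("fail") / len(outcomes)
        # Intermittent failures (not always passing, not always failing)
        # often point at vague steps or unstable expected results.
        if 0 < fail_rate < 1 and fail_rate >= threshold:
            flagged.append((step, fail_rate))
    return flagged

print(flaky_steps(history))  # [("Click 'Login'", 0.5)]
```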
Under the Hood
Test cases work by defining a controlled experiment: steps are inputs, expected results are hypotheses, and actual results are observations. This structure allows testers to systematically verify software behavior and detect deviations.
Why designed this way?
This format was created to make testing repeatable and objective, avoiding guesswork. Early software projects suffered from inconsistent testing, so clear components were introduced to standardize the process and improve communication.
┌───────────────┐      ┌───────────────┐      ┌───────────────┐
│   Steps       │─────▶│ Expected      │─────▶│ Actual        │
│ (Actions)     │      │ Results       │      │ Results       │
└───────────────┘      └───────────────┘      └───────────────┘
       │                      │                      │
       ▼                      ▼                      ▼
  Tester performs        Tester knows          Tester records
  actions in order       what should happen    what really happened
Myth Busters - 4 Common Misconceptions
Quick: Do you think actual results must always match expected results exactly? Commit to yes or no before reading on.
Common Belief: Actual results must be exactly the same as expected results for a test to pass.
Reality: Actual results can differ slightly if the difference is acceptable, like formatting or timing, and still be considered a pass.
Why it matters: Strict matching can cause false failures, wasting time investigating non-issues.
Quick: Do you think test case steps can be vague as long as expected results are clear? Commit to yes or no before reading on.
Common Belief: As long as expected results are clear, test steps can be brief or vague.
Reality: Vague steps cause testers to perform tests differently, leading to inconsistent results and missed bugs.
Why it matters: Clear steps ensure everyone tests the same way, making results reliable.
Quick: Do you think actual results are optional if the test passes? Commit to yes or no before reading on.
Common Belief: If a test passes, recording actual results is not necessary.
Reality: Recording actual results even on pass helps track behavior over time and supports audits or reviews.
Why it matters: Skipping actual results reduces traceability and can hide intermittent issues.
Quick: Do you think expected results should always be written after running the test? Commit to yes or no before reading on.
Common Belief: Expected results can be decided after running the test based on what happens.
Reality: Expected results must be defined before testing to objectively judge pass or fail.
Why it matters: Defining expected results after testing biases the outcome and defeats the purpose of testing.
Expert Zone
1
Test case steps should be atomic and independent to allow reordering or reuse in different tests.
2
Expected results can include not only visible outputs but also backend changes like database updates or logs.
3
Actual results should capture environment details (browser, OS) to help reproduce issues.
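Point 3 above is easy to automate: capture an environment snapshot alongside each actual result. A minimal sketch using Python's standard library (the field names are illustrative):

```python
import datetime
import platform

def environment_snapshot() -> dict:
    """Record environment details alongside the actual result to aid reproduction."""
    return {
        "os": platform.system(),              # e.g. "Linux", "Windows", "Darwin"
        "os_version": platform.release(),
        "python": platform.python_version(),
        "recorded_at": datetime.datetime.now().isoformat(timespec="seconds"),
    }

print(environment_snapshot())
```

In a browser-based test, a browser name and version field would typically be added from the test runner's own metadata.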
When NOT to use
Test cases with detailed steps and expected results are less useful for exploratory testing, where testers investigate without scripts. In such cases, session-based or charter-based testing is better.
Production Patterns
In real projects, test cases are stored in test management tools with traceability links to requirements and defects. Automated tests often generate actual results logs that feed into dashboards for quick quality assessment.
Connections
Scientific Method
Test cases follow the same pattern of hypothesis (expected result), experiment (steps), and observation (actual result).
Understanding test cases as experiments helps appreciate the need for clear expectations and unbiased observations.
Recipe Writing
Both require clear, step-by-step instructions and expected outcomes to produce consistent results.
Knowing how recipes work clarifies why test steps must be precise and expected results clear.
Quality Control in Manufacturing
Test cases are like quality checks on products, ensuring each item meets standards before shipping.
Seeing test cases as quality gates highlights their role in preventing defects from reaching users.
Common Pitfalls
#1 Writing vague or incomplete test steps
Wrong approach: Step 1: Check login. Step 2: See if it works.
Correct approach: Step 1: Enter username 'user1' in the username field. Step 2: Enter password 'pass123' in the password field. Step 3: Click the 'Login' button.
Root cause: Assuming testers will fill in missing details leads to inconsistent testing and missed bugs.
#2 Not defining expected results before testing
Wrong approach: Run the test steps first, then decide what the result should be.
Correct approach: Before testing, write: 'After clicking Login, the dashboard page should appear with a welcome message.'
Root cause: Deciding expectations after observing the outcome biases the verdict and makes results unreliable.
#3 Skipping actual results when tests pass
Wrong approach: Test passed, so no need to record actual results.
Correct approach: Test passed. Actual result: Dashboard page loaded with welcome message as expected.
Root cause: Thinking only failures matter reduces traceability and hides intermittent issues.
Key Takeaways
Test case components—steps, expected results, and actual results—work together to make testing clear and repeatable.
Clear, detailed steps ensure anyone can perform the test the same way, avoiding confusion.
Expected results set the goal before testing, so pass or fail decisions are objective.
Recording actual results after testing shows what really happened and helps find bugs.
Good test cases improve software quality by making testing consistent, communicable, and reliable.