A/B testing ad variations in Digital Marketing - Time & Space Complexity
When running A/B tests on ad variations, it's important to understand how the time to collect and analyze results grows as you add more ads or more audience members.
Analyze the time complexity of the following process for A/B testing ad variations.
```
// For each ad variation
for each ad in ad_variations:
    // Show ad to each user in test group
    for each user in test_group:
        record user response to ad

// After collecting data, analyze results
analyze all recorded responses
```
This code shows how each ad variation is tested by every user, and then results are analyzed.
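The pseudocode above can be sketched as a short runnable Python version. The function name `run_ab_test` and the tuple-based response records are illustrative choices, not part of the original process:

```python
def run_ab_test(ad_variations, test_group):
    """Show every ad to every user and record each response.

    Nested loops: n ad variations (outer) x m users (inner),
    so n * m responses are recorded in total.
    """
    responses = []
    for ad in ad_variations:            # outer loop: n ads
        for user in test_group:         # inner loop: m users
            responses.append((ad, user))  # stand-in for recording a real response
    return responses

responses = run_ab_test(["ad_A", "ad_B"], ["u1", "u2", "u3"])
print(len(responses))  # 2 ads x 3 users = 6 recorded responses
```

Every (ad, user) pair produces exactly one recorded response, which is why the operation count is the product of the two input sizes.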
Look at what repeats in this process.
- Primary operation: Showing each ad to every user and recording their response.
- How many times: For each ad variation, the process repeats for every user in the test group.
As you add more ads or more users, the total work grows multiplicatively, because every new ad must still be shown to every user.
| Input Size (ads x users) | Approx. Operations |
|---|---|
| 10 ads x 10 users | 100 |
| 100 ads x 100 users | 10,000 |
| 1000 ads x 1000 users | 1,000,000 |
Pattern observation: Doubling the number of ads and users multiplies the work by four, showing a fast growth in effort.
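This pattern can be checked with a few lines of Python. The helper `operations` is a hypothetical name for the operation count, defined here simply as ads times users:

```python
def operations(n_ads, n_users):
    # One recorded response per (ad, user) pair
    return n_ads * n_users

# Reproduce the table above
for n in (10, 100, 1000):
    print(f"{n} ads x {n} users -> {operations(n, n):,} operations")

# Doubling both ads and users multiplies the work by four
assert operations(20, 20) == 4 * operations(10, 10)
```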
Time Complexity: O(n × m), where n is the number of ad variations and m is the number of users in the test group.
This means the time grows proportionally to the product of the two: doubling either the ads or the users doubles the work, and doubling both quadruples it.
[X] Wrong: "Testing more ads only increases time a little because users can see ads quickly."
[OK] Correct: Each new ad must be shown to every user, so the total time grows with both ads and users, not just one.
Understanding how testing scales helps you plan campaigns and explain your approach clearly in discussions.
"What if we only test each ad with a random sample of users instead of all users? How would the time complexity change?"
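One way to explore that question is a sketch that shows each ad to a fixed-size random sample of k users instead of the whole group, making the work O(n × k) rather than O(n × m). The function name `run_sampled_test` and the sample size parameter `k` are assumptions for illustration:

```python
import random

def run_sampled_test(ad_variations, test_group, k):
    """Show each ad to a random sample of k users: O(n * k) work."""
    responses = []
    for ad in ad_variations:
        sample = random.sample(test_group, k)  # pick k users for this ad
        for user in sample:
            responses.append((ad, user))
    return responses

resp = run_sampled_test(
    ["ad_A", "ad_B", "ad_C"],
    [f"u{i}" for i in range(100)],
    k=10,
)
print(len(resp))  # 3 ads x 10 sampled users = 30, regardless of the 100-user group
```

If k stays fixed while the test group grows, the time depends only on the number of ads, at the cost of less data per ad.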