Churn prediction and prevention in Digital Marketing - Time & Space Complexity
When predicting and preventing customer churn, we want to know how the time to analyze the data grows as we gain more customers. The key question: how does the work needed change as the number of customers increases?
Analyze the time complexity of the following churn prediction process.
```python
for customer in customer_list:
    features = extract_features(customer)
    prediction = model.predict(features)
    if prediction == 'likely_to_churn':
        send_prevention_offer(customer)
```
This code checks each customer, predicts if they might leave, and sends an offer to keep them if needed.
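To see the loop in action, here is a minimal runnable sketch. The feature extractor, model, and offer sender are stand-in stubs invented for illustration (a real system would use a trained model), but the loop structure matches the code above:

```python
# Minimal sketch of the churn-prevention loop.
# extract_features, StubModel, and send_prevention_offer are
# illustrative stubs, not a real churn model.

offers_sent = []

def extract_features(customer):
    # Stub: derive a simple feature from the customer record.
    return {"days_inactive": customer["days_inactive"]}

class StubModel:
    def predict(self, features):
        # Stub rule: long inactivity suggests churn risk.
        if features["days_inactive"] > 30:
            return "likely_to_churn"
        return "likely_to_stay"

def send_prevention_offer(customer):
    offers_sent.append(customer["id"])

model = StubModel()
customer_list = [
    {"id": 1, "days_inactive": 45},
    {"id": 2, "days_inactive": 5},
    {"id": 3, "days_inactive": 60},
]

for customer in customer_list:  # runs once per customer -> O(n)
    features = extract_features(customer)
    prediction = model.predict(features)
    if prediction == "likely_to_churn":
        send_prevention_offer(customer)

print(offers_sent)  # customers 1 and 3 receive retention offers
```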
Look for the repeated steps that account for most of the time.
- Primary operation: Looping through each customer to predict churn.
- How many times: Once for every customer in the list.
As the number of customers grows, the work grows in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 predictions and checks |
| 100 | About 100 predictions and checks |
| 1000 | About 1000 predictions and checks |
Pattern observation: The work increases directly with the number of customers.
Time Complexity: O(n)
This means if you double the customers, the time to predict churn roughly doubles too.
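You can check this doubling behavior by counting predict-and-check steps instead of timing them (a small sketch; the `count_predictions` helper is hypothetical):

```python
def count_predictions(n):
    # Count the predict-and-check steps the loop performs for
    # n customers: exactly one per customer.
    ops = 0
    for _ in range(n):
        ops += 1  # one feature extraction + prediction + check
    return ops

print(count_predictions(10))   # 10
print(count_predictions(100))  # 100
# Doubling the input doubles the work:
print(count_predictions(200) / count_predictions(100))  # 2.0
```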
[X] Wrong: "Predicting churn for many customers takes the same time no matter how many customers there are."
[OK] Correct: Each customer needs a separate prediction, so more customers mean more work.
Understanding how prediction time grows helps you explain how your solution scales as the customer base expands.
"What if the model prediction step itself took longer as more customers are processed? How would that affect the overall time complexity?"
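One way to explore that question: suppose, hypothetically, the cost of each prediction grew with the number of customers already processed. Counting unit steps then shows the total work growing quadratically, O(n²) rather than O(n):

```python
def total_steps(n):
    # Hypothetical scenario: the i-th prediction costs i unit
    # steps instead of a constant amount.
    steps = 0
    for i in range(1, n + 1):
        steps += i
    return steps  # 1 + 2 + ... + n = n(n+1)/2, i.e. O(n^2)

print(total_steps(10))  # 55
print(total_steps(20))  # 210 -> doubling n roughly quadruples the work
```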