
User-based vs item-based in ML Python - Metrics Comparison

Metrics & Evaluation - User-based vs item-based
Which metric matters for User-based vs item-based and WHY

In recommendation systems, the key metrics are Precision, Recall, and F1-score. These tell us how well the system suggests items users actually like.

Precision shows how many of the recommended items are truly relevant. Recall shows how many of all the relevant items were actually found. F1-score, the harmonic mean of the two, balances both.

We use these because recommendations must be both accurate (precision) and complete (recall) to keep users happy.
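These definitions can be sketched directly in Python. This is a minimal, set-based illustration; the helper name `precision_recall_f1` and the item IDs are assumptions for the example, not a standard library API.

```python
def precision_recall_f1(recommended, relevant):
    """Compute precision, recall, and F1 for one user's recommendations."""
    rec, rel = set(recommended), set(relevant)
    tp = len(rec & rel)                        # relevant items we recommended
    precision = tp / len(rec) if rec else 0.0  # fraction of recommendations that hit
    recall = tp / len(rel) if rel else 0.0     # fraction of relevant items we found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# 4 recommendations, 2 of which appear in the user's 3 relevant items
p, r, f = precision_recall_f1(["a", "b", "c", "d"], ["a", "b", "e"])
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.67 0.57
```

The same function works for any pair of item collections, so it can score both user-based and item-based recommenders on identical ground truth.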

Confusion matrix for recommendation
|                 | Relevant            | Not Relevant        |
|-----------------|---------------------|---------------------|
| Recommended     | True Positive (TP)  | False Positive (FP) |
| Not Recommended | False Negative (FN) | True Negative (TN)  |
    

For example, suppose a system recommends 10 items: 7 are liked (TP) and 3 are not (FP). If 5 liked items were not recommended (FN), the remaining catalog consists of irrelevant items correctly left out (TN).
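Plugging the counts above (TP=7, FP=3, FN=5) into the metric formulas gives a quick worked check:

```python
# Worked numbers from the example above
tp, fp, fn = 7, 3, 5

precision = tp / (tp + fp)  # 7 / 10 = 0.70 of recommendations were liked
recall = tp / (tp + fn)     # 7 / 12 ≈ 0.58 of liked items were found
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.7 0.58 0.64
```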

Precision vs Recall tradeoff with examples

User-based filtering often has higher recall because it finds items liked by similar users, but may recommend some irrelevant items, lowering precision.

Item-based filtering tends to have higher precision because it recommends items similar to what the user liked, but may miss some relevant items, lowering recall.

Example: For a movie app, user-based may suggest more diverse movies (higher recall), but some may not fit user taste (lower precision). Item-based suggests movies very similar to watched ones (higher precision), but fewer new discoveries (lower recall).
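The tradeoff can be made concrete by scoring two hypothetical recommenders against the same relevance set. The movie lists below are made-up assumptions chosen to mimic the pattern described above, not output from real filtering algorithms.

```python
def metrics(recommended, relevant):
    """Return (precision, recall) for a recommendation list."""
    rec, rel = set(recommended), set(relevant)
    tp = len(rec & rel)
    return tp / len(rec), tp / len(rel)

relevant = {"A", "B", "C", "D", "E", "F"}       # movies the user would like

# User-based: casts a wide net -- finds more relevant items, plus some misses
user_based = ["A", "B", "C", "D", "X", "Y", "Z", "W"]
# Item-based: narrow net -- nearly every pick fits, but coverage is low
item_based = ["A", "B", "X"]

print(metrics(user_based, relevant))  # (0.5, ~0.67): lower precision, higher recall
print(metrics(item_based, relevant))  # (~0.67, ~0.33): higher precision, lower recall
```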

What good vs bad metric values look like

Good metrics: Precision and recall both above 0.7 mean recommendations are mostly relevant and cover many liked items.

Bad metrics: Precision below 0.4 means many irrelevant items are recommended. Recall below 0.3 means many liked items are missed.

For example, a user-based system with precision 0.75 and recall 0.8 is good. An item-based system with precision 0.85 but recall 0.25 may miss many relevant items.

Common pitfalls in metrics
  • Accuracy paradox: High accuracy can be misleading if most items are irrelevant and not recommended.
  • Data leakage: Using future user data in training inflates metrics falsely.
  • Overfitting: Model fits training users/items too closely but fails on new users/items, causing poor real-world metrics.
  • Ignoring diversity: High precision but recommending very similar items can bore users.
Self-check question

Your recommendation model has 98% accuracy but only 12% recall on relevant items. Is it good for production?

Answer: No. The high accuracy is misleading because most items are irrelevant and not recommended. The very low recall means the model misses most items users would like, so it fails to provide useful recommendations.
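The accuracy paradox behind this answer can be reproduced with a few counts. The catalog size and confusion-matrix cells below are illustrative assumptions chosen to match the 98%/12% figures:

```python
# Out of a 10,000-item catalog, only 100 items are actually relevant.
tp, fn = 12, 88        # relevant: 12 found, 88 missed
fp, tn = 112, 9788     # irrelevant: 112 recommended, 9788 correctly skipped

total = tp + fp + fn + tn
accuracy = (tp + tn) / total  # dominated by the huge TN count
recall = tp / (tp + fn)       # fraction of relevant items actually found

print(round(accuracy, 2), round(recall, 2))  # 0.98 0.12
```

Accuracy is inflated almost entirely by true negatives, so it says little about recommendation quality when relevant items are rare; recall exposes the failure.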

Key Result
Precision and recall are the key metrics for comparing user-based and item-based recommendation quality, balancing relevance against coverage.