ML Python · ~20 mins

User-based vs item-based in ML Python - Practice Questions

Challenge - 5 Problems
🧠 Conceptual · intermediate
Difference in similarity calculation between user-based and item-based collaborative filtering

In collaborative filtering, similarity is a key concept. Which statement correctly describes how similarity is calculated differently in user-based versus item-based methods?

A. User-based calculates similarity between items based on user ratings, while item-based calculates similarity between users based on item ratings.
B. User-based calculates similarity between users based on their item ratings, while item-based calculates similarity between items based on user ratings.
C. Both user-based and item-based calculate similarity only between users, ignoring items.
D. Both user-based and item-based calculate similarity only between items, ignoring users.
💡 Hint

Think about who is being compared in each method: users or items.
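To make the distinction concrete, here is a minimal sketch using a toy ratings matrix (the matrix values are made up for illustration): the same cosine-similarity routine compares user *rows* for user-based filtering and item *columns* for item-based filtering.

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = items (0 = unrated).
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
])

def cosine_sim(M):
    """Pairwise cosine similarity between the rows of M."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # guard against all-zero rows
    unit = M / norms
    return unit @ unit.T

user_sim = cosine_sim(R)    # user-based: compare users -> 3x3 matrix
item_sim = cosine_sim(R.T)  # item-based: compare items -> 4x4 matrix

print(user_sim.shape, item_sim.shape)  # (3, 3) (4, 4)
```

The only difference is whether the matrix is transposed before comparing rows, which is exactly the "who is being compared" question in the hint.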

Model Choice · intermediate
Choosing between user-based and item-based collaborative filtering for a large user base

You have a recommendation system with millions of users but only a few thousand items. Which collaborative filtering approach is generally more efficient and why?

A. User-based, because it ignores item data and focuses only on user profiles.
B. User-based, because millions of users provide more data to compute accurate user similarities quickly.
C. Item-based, because it requires no similarity calculations.
D. Item-based, because the number of items is smaller, so computing item similarities is faster and more stable.
💡 Hint

Consider which similarity matrix is smaller and easier to compute.
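A quick back-of-the-envelope calculation (with hypothetical counts matching the question's scenario) shows why the item-item matrix is so much cheaper to build and store:

```python
# Hypothetical scale from the question: many users, few items.
n_users, n_items = 2_000_000, 5_000

# Distinct pairs in each pairwise similarity matrix
# (upper triangle, excluding the diagonal).
user_pairs = n_users * (n_users - 1) // 2  # ~2 trillion pairs
item_pairs = n_items * (n_items - 1) // 2  # ~12.5 million pairs

print(f"user-user pairs: {user_pairs:,}")
print(f"item-item pairs: {item_pairs:,}")
print(f"user-user matrix is ~{user_pairs / item_pairs:,.0f}x larger")
```

Beyond raw size, item similarities also tend to be more stable over time, since each item typically accumulates ratings from many users while any single user rates relatively few items.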

Metrics · advanced
Evaluating recommendation accuracy for user-based vs item-based methods

You run both user-based and item-based collaborative filtering on the same dataset. Which metric would best help you compare their prediction accuracy on unseen user-item ratings?

A. Total number of items recommended.
B. Number of users in the dataset.
C. Root Mean Squared Error (RMSE) between predicted and actual ratings.
D. Training time of the model.
💡 Hint

Think about a metric that measures how close predictions are to real ratings.
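RMSE is straightforward to compute on a held-out set of ratings. This sketch uses made-up predictions from two hypothetical models purely to show how the comparison would work:

```python
import numpy as np

def rmse(predicted, actual):
    """Root Mean Squared Error over held-out (user, item) ratings."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# Hypothetical held-out ratings and each model's predictions for them.
actual     = [4.0, 3.0, 5.0, 2.0]
user_based = [3.5, 3.0, 4.5, 2.5]
item_based = [4.0, 2.5, 5.0, 2.0]

print(f"user-based RMSE: {rmse(user_based, actual):.4f}")
print(f"item-based RMSE: {rmse(item_based, actual):.4f}")
```

Because RMSE squares the errors, it penalises large prediction misses more heavily than a metric like MAE would, which is often what you want when comparing rating predictors.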

🔧 Debug · advanced
Identifying the cause of sparse similarity matrix in user-based filtering

You implemented user-based collaborative filtering but notice the user similarity matrix is very sparse, causing poor recommendations. What is the most likely cause?

A. Users have rated very few common items, so similarity cannot be reliably computed.
B. The model is overfitting due to too many training epochs.
C. The dataset contains too many items, causing memory overflow.
D. The item similarity matrix was used instead of the user similarity matrix by mistake.
💡 Hint

Think about what is needed to calculate similarity between two users.
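A useful diagnostic for this bug is to count, for every pair of users, how many items both have rated. In the toy matrix below (values invented for illustration), two users share no rated items at all, so no meaningful similarity between them can be computed:

```python
import numpy as np

# Toy ratings matrix, 0 = unrated. Users 0 and 1 have no items in common.
R = np.array([
    [5, 0, 0, 3],
    [0, 4, 2, 0],
    [1, 0, 2, 0],
])

rated = (R > 0).astype(int)

# overlap[u, v] = number of items rated by both user u and user v
overlap = rated @ rated.T

print(overlap)
```

If most off-diagonal entries of `overlap` are zero or near zero, the similarity matrix is sparse because of insufficient co-rated items (answer A), not because of any modelling mistake.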

🧠 Conceptual · expert
Impact of cold start problem on user-based vs item-based collaborative filtering

Which statement best explains how the cold start problem affects user-based and item-based collaborative filtering differently?

A. User-based struggles more with new users who have no ratings, while item-based struggles more with new items that have no user ratings.
B. User-based struggles more with new items, while item-based struggles more with new users.
C. Both user-based and item-based methods are equally unaffected by cold start problems.
D. Cold start only affects content-based filtering, not collaborative filtering.
💡 Hint

Consider what data each method needs to make recommendations.
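The cold start problem is visible directly in the ratings matrix. In this sketch (toy data, purely illustrative), a brand-new user's row is all zeros, so their rating vector has zero norm and cosine similarity to every other user is undefined (a 0/0 form); user-based CF therefore has no neighbours to draw predictions from. The symmetric situation holds for a new item's all-zero column under item-based CF:

```python
import numpy as np

R = np.array([
    [5.0, 3.0, 1.0],
    [4.0, 2.0, 1.0],
    [0.0, 0.0, 0.0],  # new user: no ratings yet
])

new_user = R[2]
norm = float(np.linalg.norm(new_user))

# With a zero-norm vector, cosine similarity = dot / (norm_a * norm_b)
# would divide by zero, so no neighbourhood can be formed for this user.
print("new user rating norm:", norm)
```

This is why production systems often fall back on content-based features or popularity baselines until a new user or item accumulates enough ratings.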