ML Python · ~20 mins

Content-based filtering in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
How does content-based filtering recommend items?
Which of the following best describes how content-based filtering recommends items to a user?
A. It recommends items similar to those the user liked before, based on item features.
B. It recommends items liked by other users with similar tastes.
C. It randomly recommends popular items regardless of user preferences.
D. It recommends items based on the time of day and user location.
💡 Hint
Think about how the system uses the user's past preferences and item details.
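To make the idea concrete, here is a minimal sketch of content-based scoring. The item names and binary feature vectors are hypothetical; the point is that the user's profile is built from the features of items they already liked, and unseen items are scored against that profile.

```python
import numpy as np

# Hypothetical item features (columns might be genre tags like
# action / comedy / documentary).
items = {
    'action_movie':       np.array([1, 0, 0]),
    'action_comedy':      np.array([1, 1, 0]),
    'documentary':        np.array([0, 0, 1]),
    'action_documentary': np.array([1, 0, 1]),
}

# The user liked these items before; their profile is the mean of the
# corresponding feature vectors -- past preferences expressed as features.
liked = ['action_movie', 'action_comedy']
profile = np.mean([items[name] for name in liked], axis=0)

# Score every unseen item by similarity (here: dot product) to the profile.
scores = {name: float(np.dot(profile, vec))
          for name, vec in items.items() if name not in liked}
print(scores)
```

Note that no other users are involved anywhere: the recommendation depends only on this user's history and the items' own features.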
Predict Output
intermediate
Output of cosine similarity calculation
What is the output of this Python code that calculates cosine similarity between two item feature vectors?
```python
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

item1 = np.array([[1, 0, 1, 0]])
item2 = np.array([[0, 1, 1, 0]])
similarity = cosine_similarity(item1, item2)
print(round(similarity[0][0], 2))
```
A. 0.33
B. 0.50
C. 0.67
D. 0.75
💡 Hint
Recall the cosine similarity formula and count the overlapping features.
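After you lock in your answer, you can check it by hand with plain NumPy, applying the definition directly to the same vectors as in the question:

```python
import numpy as np

# Same vectors as in the question, flattened to 1-D.
item1 = np.array([1, 0, 1, 0])
item2 = np.array([0, 1, 1, 0])

# Cosine similarity = dot(a, b) / (||a|| * ||b||)
dot = np.dot(item1, item2)  # counts the overlapping 1-features
cosine = dot / (np.linalg.norm(item1) * np.linalg.norm(item2))
print(round(cosine, 2))
```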
Model Choice
advanced
Best model for content-based filtering with sparse user-item data
Which model is most suitable for content-based filtering when user-item interaction data is sparse but item features are rich?
A. Collaborative filtering using user similarity
B. Matrix factorization using only user-item ratings
C. K-Nearest Neighbors using item feature vectors
D. Random forest classifier on user demographic data
💡 Hint
Focus on models that use item features directly.
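For intuition, here is a sketch of a nearest-neighbor lookup over item features only, using scikit-learn's `NearestNeighbors`. The feature matrix is hypothetical; notice that no user-item ratings appear anywhere, which is why this approach is unaffected by sparse interaction data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical item feature matrix: rows = items, columns = binary
# content attributes (e.g. genre tags).
item_features = np.array([
    [1, 0, 1, 0],  # item 0
    [1, 0, 1, 1],  # item 1
    [0, 1, 0, 1],  # item 2
    [1, 0, 0, 0],  # item 3
])

# Fit KNN on item features; cosine distance compares content profiles.
knn = NearestNeighbors(n_neighbors=3, metric='cosine')
knn.fit(item_features)

# Find items similar to item 0 (the query item itself comes back first,
# at distance 0).
distances, indices = knn.kneighbors(item_features[0:1])
print(indices[0])    # nearest items by content
print(distances[0])  # cosine distances (0 = identical direction)
```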
Hyperparameter
advanced
Choosing the number of neighbors in content-based KNN
In a content-based filtering system using K-Nearest Neighbors on item features, what is the effect of increasing the number of neighbors (k)?
A. It increases recommendation diversity but may reduce relevance.
B. It always improves recommendation accuracy.
C. It decreases the number of recommended items.
D. It causes the model to ignore item features.
💡 Hint
Think about how more neighbors affect similarity and variety.
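A toy experiment can make the trade-off visible. The similarity scores below are made up; as k grows, the recommendation set widens but its average similarity to the user's profile drops:

```python
import numpy as np

# Hypothetical similarities between a user's profile and five candidates.
similarities = np.array([0.95, 0.90, 0.60, 0.40, 0.10])
items = np.array(['A', 'B', 'C', 'D', 'E'])

for k in (2, 4):
    # Take the k most similar items (indices sorted by descending score).
    top_k = np.argsort(similarities)[::-1][:k]
    print(f"k={k}: items {list(items[top_k])}, "
          f"mean similarity {similarities[top_k].mean():.2f}")
```

With k=2 only the closest matches are returned; with k=4 the set is more varied but includes weaker matches.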
🔧 Debug
expert
Why does this content-based filtering code produce identical recommendations for all users?
Given this simplified content-based filtering code, why do all users get the same recommended items?

```python
import numpy as np

user_profiles = {
    'user1': np.array([1, 0, 1]),
    'user2': np.array([1, 0, 1]),
    'user3': np.array([1, 0, 1])
}
item_features = {
    'itemA': np.array([1, 0, 0]),
    'itemB': np.array([0, 1, 1]),
    'itemC': np.array([1, 0, 1])
}

recommendations = {}
for user, profile in user_profiles.items():
    scores = {}
    for item, features in item_features.items():
        scores[item] = np.dot(profile, features)
    recommended = sorted(scores, key=scores.get, reverse=True)[:2]
    recommendations[user] = recommended

print(recommendations)
```
A. The code uses Euclidean distance instead of dot product.
B. The item features are normalized incorrectly, causing identical scores.
C. The sorting function is not stable, causing random recommendations.
D. All user profiles are identical, so dot products and recommendations are the same.
💡 Hint
Check the user profiles carefully.
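Once you have spotted the issue, here is a sketch of the same loop with distinct (hypothetical) user profiles, which is enough to make the recommendations diverge. In a real system each profile would be learned from that user's own liked items rather than hard-coded:

```python
import numpy as np

# Distinct profiles per user -- the only change from the buggy version.
user_profiles = {
    'user1': np.array([1, 0, 1]),
    'user2': np.array([0, 1, 1]),
    'user3': np.array([1, 1, 0]),
}
item_features = {
    'itemA': np.array([1, 0, 0]),
    'itemB': np.array([0, 1, 1]),
    'itemC': np.array([1, 0, 1]),
}

recommendations = {}
for user, profile in user_profiles.items():
    scores = {item: int(np.dot(profile, features))
              for item, features in item_features.items()}
    # Top two items by score; ties keep dict insertion order (stable sort).
    recommendations[user] = sorted(scores, key=scores.get, reverse=True)[:2]

print(recommendations)
```

With different profiles, the dot products differ per user and each user gets their own ranking.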