NLP · ~20 mins

Choosing number of topics in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
Why is choosing the number of topics important in topic modeling?

In topic modeling, selecting the number of topics affects the model's usefulness. Why is this choice important?

A. Because too few topics may merge distinct themes, and too many topics may split themes unnecessarily.
B. Because the number of topics determines the size of the input data.
C. Because the number of topics controls the speed of the training algorithm only.
D. Because the number of topics decides the number of words in the vocabulary.
💡 Hint

Think about how topics represent themes in the data.

Metrics
intermediate
Which metric helps decide the optimal number of topics?

When training topic models, which metric is commonly used to evaluate and choose the best number of topics?

A. Mean Squared Error, which measures prediction error.
B. Accuracy, which measures the fraction of correct topic labels.
C. F1 Score, which balances precision and recall.
D. Perplexity, which measures how well the model predicts unseen data.
💡 Hint

Think about a metric that measures prediction quality on new data.

Predict Output
advanced
Output of perplexity calculation for different topic numbers

Given the code below that computes perplexity for different numbers of topics, what is the output?

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

texts = ["apple banana fruit", "banana orange fruit", "car truck vehicle", "truck bus vehicle"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

perplexities = {}
for n_topics in [2, 3]:
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    perplexities[n_topics] = lda.perplexity(X)

print(perplexities)
A. {2: 9.0, 3: 9.0}
B. {2: 9.0, 3: 7.5}
C. {2: 7.5, 3: 9.0}
D. {2: 7.5, 3: 7.5}
💡 Hint

Lower perplexity means better model fit.
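The hint above can be turned into a selection rule: among candidate topic counts, keep the one whose perplexity is lowest. A minimal sketch with made-up scores (the numbers are illustrative, not from a real run):

```python
# Hypothetical perplexity per topic count -- illustrative values, not measured
perplexities = {5: 12.3, 10: 9.8, 15: 10.6}

# Lower perplexity means the model predicts held-out text better,
# so pick the topic count that minimizes it.
best_n = min(perplexities, key=perplexities.get)
print(best_n)  # -> 10
```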

Model Choice
advanced
Choosing number of topics using coherence score

You want to select the number of topics for an LDA model using coherence score. Which approach is best?

A. Select the topic count randomly and trust the model to adjust.
B. Pick the topic count with the lowest perplexity without checking coherence.
C. Train models with different topic counts and pick the one with the highest coherence score.
D. Choose the topic count based on the largest vocabulary size.
💡 Hint

Coherence measures how interpretable topics are.
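The coherence-based workflow is: train one model per candidate topic count, score each, and keep the highest coherence. The sketch below uses a hypothetical `coherence_for` stand-in with illustrative scores; a real version would train an LDA model and score it (e.g. with gensim's `CoherenceModel`):

```python
def coherence_for(n_topics):
    # Hypothetical stand-in for "train LDA with n_topics, score its topics".
    # Illustrative numbers only -- a real run would compute these.
    scores = {2: 0.41, 4: 0.55, 6: 0.47, 8: 0.40}
    return scores[n_topics]

candidates = [2, 4, 6, 8]
best_n = max(candidates, key=coherence_for)  # highest coherence wins
print(best_n)  # -> 4
```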

🔧 Debug
expert
Why does increasing topics beyond a point worsen model quality?

After increasing the number of topics in your LDA model beyond 10, you notice coherence scores drop and topics become less meaningful. What is the most likely cause?

A. The model is overfitting by splitting coherent topics into smaller, less meaningful ones.
B. The model is underfitting because it has too few topics to capture data complexity.
C. The vocabulary size is too small to support more topics.
D. The training data is too large, causing the model to fail.
💡 Hint

Think about what happens when a model tries to create too many topics.
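One way to see this failure mode is to track coherence as the topic count grows and stop at the first decline. The scores below are illustrative, not measured:

```python
# Hypothetical coherence for increasing topic counts (illustrative only)
scores = [(5, 0.48), (10, 0.55), (15, 0.51), (20, 0.44)]

# Past the peak, extra topics split coherent themes into fragments,
# so stop at the first drop in coherence.
best_n, best_score = scores[0]
for n, s in scores[1:]:
    if s <= best_score:
        break
    best_n, best_score = n, s
print(best_n)  # -> 10
```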