
Visualizing embeddings (t-SNE) in NLP - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is the main purpose of t-SNE in visualizing embeddings?
t-SNE helps to reduce high-dimensional data (like word embeddings) into 2 or 3 dimensions so we can see patterns and clusters easily on a simple plot.
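A minimal sketch of this idea using scikit-learn's TSNE. The 300-dimensional "embeddings" here are random stand-ins for real word vectors (e.g. word2vec or GloVe), just to show the dimensionality reduction step:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in embeddings: 50 "words", 300 dimensions each (random, for illustration).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 300))

# Reduce to 2 dimensions for plotting; perplexity must be smaller than n_samples.
tsne = TSNE(n_components=2, perplexity=10, random_state=0)
points_2d = tsne.fit_transform(embeddings)

print(points_2d.shape)  # (50, 2)
```

Each row of `points_2d` is one word's position on the 2-D plot.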
beginner
Why can't we just plot embeddings directly without t-SNE?
Embeddings usually have many dimensions (like 100 or 300), which we can't visualize directly. t-SNE reduces these dimensions while keeping similar points close together.
beginner
What does it mean when points are close together in a t-SNE plot of embeddings?
Points close together mean their original embeddings are similar, so the words or items they represent are related or have similar meanings.
intermediate
What is a common challenge when using t-SNE for embedding visualization?
t-SNE can be slow on large datasets and can show different results on each run, because it starts from a random initialization; fixing the random seed makes the layout repeatable.
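A small sketch of the repeatability point: with scikit-learn's TSNE, passing the same `random_state` to two runs on the same data yields the same layout (the data here is random, for illustration only):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
X = rng.normal(size=(40, 64))

# Same data + same random_state -> same 2-D layout on both runs.
a = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)
b = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)

print(np.allclose(a, b))
```

Without a fixed `random_state`, each run may produce a differently arranged (though often similarly clustered) plot.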
intermediate
Name one alternative to t-SNE for visualizing embeddings.
UMAP is a popular alternative that is faster and often preserves more of the global structure in the data.
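UMAP lives in the third-party `umap-learn` package and follows the same `fit_transform` pattern as TSNE. As a sketch using only scikit-learn, PCA gives a fast, deterministic linear baseline for a first-pass 2-D view (random stand-in data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(100, 300))

# PCA: fast linear projection to 2-D. UMAP (umap.UMAP from umap-learn)
# would be called the same way: reducer.fit_transform(embeddings).
points_2d = PCA(n_components=2).fit_transform(embeddings)
print(points_2d.shape)  # (100, 2)
```

PCA is cheap but only captures linear structure; UMAP and t-SNE can reveal nonlinear clusters that PCA flattens out.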
What does t-SNE primarily do with high-dimensional embeddings?
A. Reduce dimensions to 2 or 3 for visualization
B. Increase dimensions for better accuracy
C. Convert embeddings into text
D. Remove noise from embeddings
In a t-SNE plot, what does it mean if two points are far apart?
A. They represent the same word
B. Their embeddings are very different
C. They have identical meanings
D. They are errors in the data
Which of these is a limitation of t-SNE?
A. It always produces the same output
B. It increases data dimensions
C. It can be slow on large datasets
D. It removes important data features
Which alternative method is known for faster embedding visualization than t-SNE?
A. Linear Regression
B. PCA
C. K-Means
D. UMAP
Why do we use 2D or 3D plots for embeddings?
A. Because humans can easily understand 2D or 3D visuals
B. Because embeddings only have 2 or 3 dimensions
C. Because 2D plots increase embedding accuracy
D. Because 3D plots remove noise
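Once the embeddings are down to 2-D, a plain matplotlib scatter plot is enough to inspect them. A sketch with random stand-in points labelled by hypothetical word names:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Stand-in for t-SNE output: one 2-D point per word (random, for illustration).
rng = np.random.default_rng(0)
points_2d = rng.normal(size=(20, 2))
words = [f"word{i}" for i in range(20)]

fig, ax = plt.subplots()
ax.scatter(points_2d[:, 0], points_2d[:, 1])
for (x, y), w in zip(points_2d, words):
    ax.annotate(w, (x, y), fontsize=8)  # label each point with its word
ax.set_title("t-SNE projection of word embeddings")
fig.savefig("tsne_plot.png")
```

On real embeddings, related words (e.g. days of the week) typically appear as visible clusters in this plot.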
Explain how t-SNE helps in understanding word embeddings.
Think about how we can see relationships between words visually.
Describe one limitation of t-SNE and a possible alternative method.
Consider speed and consistency issues.