
GloVe embeddings in NLP - Cheat Sheet & Quick Revision

Recall & Review
beginner
What does GloVe stand for in GloVe embeddings?
GloVe stands for Global Vectors for Word Representation. It is a method to create word embeddings by capturing global word co-occurrence statistics.
intermediate
How does GloVe differ from Word2Vec in learning word embeddings?
GloVe uses a matrix factorization approach on the global word co-occurrence matrix, while Word2Vec learns embeddings by predicting words in local context windows using a neural network.
beginner
What is the main input data structure used by GloVe to learn embeddings?
GloVe uses a word co-occurrence matrix that counts how often pairs of words appear together in a large text corpus.
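To make the co-occurrence matrix concrete, here is a minimal sketch of counting weighted co-occurrences over a toy corpus. The window size and the toy sentence are illustrative assumptions; weighting each pair by the inverse of the distance between the two words mirrors how GloVe's reference implementation accumulates counts.

```python
from collections import defaultdict

def cooccurrence_counts(corpus, window=2):
    """Count how often each ordered word pair appears within `window` words,
    weighting each co-occurrence by 1/distance (closer pairs count more)."""
    counts = defaultdict(float)
    for sentence in corpus:
        for i, word in enumerate(sentence):
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[(word, sentence[j])] += 1.0 / abs(i - j)
    return counts

# Toy corpus: one short sentence
corpus = [["the", "cat", "sat", "on", "the", "mat"]]
counts = cooccurrence_counts(corpus)
```

In a real setting the corpus has billions of tokens and the resulting matrix is large but sparse, which is why GloVe trains only on the nonzero entries.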
beginner
Why are GloVe embeddings useful in natural language processing tasks?
They capture semantic relationships between words by encoding how frequently words co-occur globally, helping models understand word meanings and similarities.
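Word similarity between embeddings is typically measured with cosine similarity. The sketch below uses made-up 3-dimensional vectors purely for illustration (real GloVe vectors have 50 to 300 dimensions and are learned from data):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Illustrative toy embeddings (not real GloVe vectors)
king = [0.8, 0.6, 0.1]
queen = [0.7, 0.7, 0.2]
apple = [0.1, 0.2, 0.9]

# Semantically related words should score higher than unrelated ones
assert cosine_similarity(king, queen) > cosine_similarity(king, apple)
```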
intermediate
What kind of mathematical operation does GloVe use to learn word vectors from the co-occurrence matrix?
GloVe performs matrix factorization by minimizing a weighted least squares objective, finding word vectors whose dot products reconstruct the logarithms of the co-occurrence counts.
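A minimal sketch of the pieces involved, assuming the standard hyperparameters from the GloVe paper (x_max = 100, alpha = 0.75); the vector and bias values passed in are arbitrary placeholders:

```python
import math

def glove_weight(x, x_max=100.0, alpha=0.75):
    """GloVe's weighting function f(X_ij): down-weights rare pairs and
    caps the influence of very frequent ones at 1."""
    return (x / x_max) ** alpha if x < x_max else 1.0

def glove_pair_loss(w_i, w_j, b_i, b_j, x_ij):
    """Weighted squared error for one word pair:
    f(X_ij) * (w_i . w_j + b_i + b_j - log X_ij)^2"""
    dot = sum(a * b for a, b in zip(w_i, w_j))
    return glove_weight(x_ij) * (dot + b_i + b_j - math.log(x_ij)) ** 2
```

Training sums this loss over all nonzero entries of the co-occurrence matrix and minimizes it with gradient descent (AdaGrad in the original implementation).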
What is the main data structure GloVe uses to learn word embeddings?
A. Word frequency list
B. Part-of-speech tags
C. Word co-occurrence matrix
D. Dependency parse trees
Which of the following best describes GloVe's approach?
A. Matrix factorization of global co-occurrence counts
B. Using recurrent neural networks
C. Clustering words by frequency
D. Predicting words from local context windows
What does a GloVe embedding vector represent?
A. The frequency of a word in the corpus
B. Semantic meaning based on global word co-occurrence
C. The position of a word in a sentence
D. The length of a word
Which is a key advantage of GloVe embeddings?
A. They capture global statistical information
B. They require no training data
C. They only consider immediate neighbors
D. They are random vectors
What kind of loss function does GloVe minimize during training?
A. Mean absolute error
B. Cross-entropy loss
C. Hinge loss
D. Weighted least squares loss
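For reference, the weighted least squares objective that GloVe minimizes, as given in the original paper, can be written as:

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2
```

Here $X_{ij}$ is the co-occurrence count for words $i$ and $j$, $w_i$ and $\tilde{w}_j$ are the word and context vectors, $b_i$ and $\tilde{b}_j$ are biases, $V$ is the vocabulary size, and $f$ is the weighting function that reduces the influence of rare and overly frequent pairs.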
Explain in your own words how GloVe embeddings are created from a text corpus.
Think about how often words appear together and how that information is turned into vectors.
Describe the main difference between GloVe and Word2Vec embeddings.
Focus on what data each method uses to learn embeddings.