
Word2Vec (CBOW and Skip-gram) in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Difference between CBOW and Skip-gram in Word2Vec

Which statement correctly describes the main difference between the CBOW and Skip-gram models in Word2Vec?

A. CBOW predicts the target word from surrounding context words, while Skip-gram predicts surrounding context words from the target word.
B. CBOW is used only for large datasets, while Skip-gram works only on small datasets.
C. CBOW uses one-hot encoding for words, while Skip-gram uses word embeddings directly as input.
D. CBOW predicts the next word in a sentence, while Skip-gram predicts the previous word.
💡 Hint

Think about which model uses context to predict the center word and which uses the center word to predict context.
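To make the distinction concrete, here is a minimal sketch (not part of the problem; the sentence and window size are illustrative assumptions) that builds the training pairs each model would see:

```python
sentence = "the quick brown fox".split()
window = 1

cbow_pairs, sg_pairs = [], []
for i, center in enumerate(sentence):
    # Context words within `window` positions of the center word
    context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
    cbow_pairs.append((context, center))  # CBOW: context -> center word
    for c in context:
        sg_pairs.append((center, c))      # Skip-gram: center word -> one context word

print(cbow_pairs[1])  # (['the', 'brown'], 'quick')
print(sg_pairs[:2])   # [('the', 'quick'), ('quick', 'the')]
```

Note that the same corpus yields pairs pointing in opposite directions: CBOW consumes the whole context to predict one center word, while Skip-gram emits one pair per (center, context-word) combination.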

Predict Output · intermediate
Output of a simple Skip-gram training step

Given the following simplified Skip-gram training code snippet, what will be the shape of the output vector representing the predicted context word probabilities?

Python
import numpy as np

vocab_size = 10
embedding_dim = 5

# Random embedding matrix
embeddings = np.random.rand(vocab_size, embedding_dim)

# One-hot encoded center word (index 3)
center_word = np.zeros(vocab_size)
center_word[3] = 1

# Compute hidden layer (embedding lookup)
hidden = embeddings.T @ center_word  # shape (embedding_dim,)

# Output weights
output_weights = np.random.rand(vocab_size, embedding_dim)

# Compute output layer
output = output_weights @ hidden  # shape ?

print(output.shape)
A. (5,)
B. (10,)
C. (1, 10)
D. (10, 5)
💡 Hint

Consider the matrix multiplication dimensions: output_weights (vocab_size x embedding_dim) times hidden (embedding_dim,).
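The hint can be verified directly with NumPy. This small sketch (random data, but the same shapes as in the snippet above) checks the result of the matrix-vector product:

```python
import numpy as np

vocab_size, embedding_dim = 10, 5

# Same shapes as in the problem: (vocab_size, embedding_dim) @ (embedding_dim,)
output_weights = np.random.rand(vocab_size, embedding_dim)
hidden = np.random.rand(embedding_dim)

output = output_weights @ hidden
print(output.shape)  # (10,): one raw score per vocabulary word
```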

Model Choice · advanced
Choosing Word2Vec model for rare words

You want to train word embeddings on a small dataset with many rare words. Which Word2Vec model is generally better at learning good embeddings for rare words?

A. CBOW, because it averages context and smooths rare word signals.
B. Skip-gram, because it ignores rare words during training.
C. Skip-gram, because it predicts context from the target word and better captures rare word representations.
D. CBOW, because it uses hierarchical softmax, which is faster for rare words.
💡 Hint

Think about which model focuses more on individual target words and their contexts.

Hyperparameter · advanced
Effect of window size in Word2Vec training

In Word2Vec training, what is the effect of increasing the window size parameter?

A. It controls the learning rate decay during training.
B. It decreases the number of context words, focusing only on very close neighbors.
C. It changes the embedding dimension, making vectors longer.
D. It increases the number of context words considered, capturing broader semantic relationships, but it may also add noise.
💡 Hint

Window size defines how many words around the target word are used as context.
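To illustrate what the window parameter controls, here is a small helper (hypothetical, not from any library) that extracts the context around a center word for a given window size:

```python
def context_window(tokens, center_idx, window):
    """Return up to `window` tokens on each side of the center word."""
    left = tokens[max(0, center_idx - window):center_idx]
    right = tokens[center_idx + 1:center_idx + 1 + window]
    return left + right

tokens = "the quick brown fox jumps over the lazy dog".split()

# Center word is "jumps" (index 4); a larger window pulls in more context.
print(context_window(tokens, 4, 1))  # ['fox', 'over']
print(context_window(tokens, 4, 3))  # ['quick', 'brown', 'fox', 'over', 'the', 'lazy']
```

With window=1 only immediate neighbors are used; with window=3 the context spans words that are related more loosely, which is the broader-but-noisier trade-off described in option D.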

Metrics · expert
Evaluating Word2Vec embeddings with analogy task

After training Word2Vec embeddings, you want to evaluate them using the analogy task: "king is to queen as man is to ?". Which metric best measures the quality of the embeddings on this task?

A. Cosine similarity between the vector (queen - king + man) and all other word vectors, to find the closest match.
B. Euclidean distance between the vectors (king + queen) and (man + woman).
C. Dot product between the embeddings of 'king' and 'queen' only.
D. Mean squared error between predicted and true word indices.
💡 Hint

Think about how analogy tasks use vector arithmetic and similarity measures.
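The vector-arithmetic idea behind the analogy task can be sketched with toy 3-dimensional embeddings (the vectors below are made up purely for illustration):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy embeddings (assumed values, for illustration only)
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.2, 0.8]),
    "apple": np.array([0.1, 0.1, 0.1]),
}

# "king is to queen as man is to ?"  ->  queen - king + man
target = emb["queen"] - emb["king"] + emb["man"]

# Rank all remaining words by cosine similarity to the target vector
best = max((w for w in emb if w not in {"queen", "king", "man"}),
           key=lambda w: cosine(emb[w], target))
print(best)  # woman
```

In practice the analogy words themselves are excluded from the candidate set, exactly as done here, since the target vector is usually closest to one of its own inputs.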