Easy · Conceptual · Q2 of 15
NLP - Word Embeddings
In the Skip-gram architecture of Word2Vec, which of the following best describes the relationship between input and output?
A. The model takes a center word as input and predicts its surrounding context words.
B. The model takes context words as input and predicts the center word.
C. The model takes a sentence as input and predicts the next sentence.
D. The model takes a word embedding as input and outputs a probability distribution over vocabulary.
Step-by-Step Solution
  1. Step 1: Understand Skip-gram model

    The Skip-gram model predicts context words given a center word.
  2. Step 2: Contrast with CBOW

    CBOW does the reverse: it takes the context words as input and predicts the center word. Option B therefore describes CBOW, not Skip-gram.
  3. Final Answer:

    Option A: the model takes a center word as input and predicts its surrounding context words.
  4. Quick Check:

    Skip-gram input = center word, output = context words [OK]
Quick Trick: Skip-gram predicts context from center word [OK]
Common Mistakes:
  • Confusing Skip-gram with CBOW input-output roles
  • Assuming Skip-gram predicts next word in sequence
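The input/output contrast between the two architectures can be sketched by generating their training pairs from a toy sentence (the helper names below are hypothetical, for illustration only; real Word2Vec implementations also apply subsampling and negative sampling, which are omitted here):

```python
def make_skipgram_pairs(tokens, window=2):
    """Skip-gram: each training pair is (center word, one context word)."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def make_cbow_pairs(tokens, window=2):
    """CBOW: each training pair is (list of context words, center word)."""
    pairs = []
    for i, center in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window), min(len(tokens), i + window + 1))
                   if j != i]
        pairs.append((context, center))
    return pairs

tokens = "the quick brown fox jumps".split()
print(make_skipgram_pairs(tokens)[:2])
# [('the', 'quick'), ('the', 'brown')]  -- center word in, context words out
print(make_cbow_pairs(tokens)[2])
# (['the', 'quick', 'fox', 'jumps'], 'brown')  -- context in, center word out
```

Notice the roles flip: Skip-gram emits many (center, context) pairs per position, while CBOW emits one (context, center) pair per position.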
