Easy · 📝 Conceptual · Q1 of 15
NLP - Text Similarity and Search
Why are vector embeddings used to represent words or sentences in NLP tasks involving semantic similarity?
A. Because embeddings capture contextual meaning in a continuous vector space
B. Because embeddings convert text into binary code for faster processing
C. Because embeddings remove all semantic information to simplify data
D. Because embeddings only store word frequency counts
Step-by-Step Solution
  1. Step 1: Understand embeddings

    Embeddings map words or sentences into dense vectors that encode semantic information.
  2. Step 2: Purpose in semantic similarity

    Because semantically similar texts map to nearby vectors, meaning can be compared by measuring distances or angles between them (e.g. cosine similarity).
  3. Final Answer:

    Because embeddings capture contextual meaning in a continuous vector space -> Option A
  4. Quick Check:

    Embeddings represent meaning, not just frequency or binary code. ✓
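The comparison described in Step 2 can be sketched with cosine similarity. Below is a minimal illustration using hypothetical 3-dimensional vectors (real embedding models produce hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for learned embeddings:
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, kitten))  # near 1.0: similar meaning
print(cosine_similarity(cat, car))     # much lower: dissimilar
```

Vectors pointing in nearly the same direction score close to 1, which is why "cat" and "kitten" come out far more similar than "cat" and "car".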
Quick Trick: Embeddings encode meaning as vectors for similarity ✓
Common Mistakes:
  • Thinking embeddings are simple frequency counts
  • Assuming embeddings discard semantic info
  • Believing embeddings are binary representations
