NLP · ~10 mins

Why embeddings capture semantic meaning in NLP - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to create a simple word embedding using a dictionary.

Python
word_embeddings = {'cat': [0.1, 0.3, 0.5], 'dog': [1]}
A. [0.2, 0.4, 0.6]
B. [1, 2, 3]
C. [0.5, 0.5, 0.5]
D. [0, 0, 0]
Common Mistakes
Choosing a vector with different length or scale.
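The point of this question is that every vector in an embedding table must have the same dimensionality and a comparable scale, or the vectors cannot be compared. A minimal sketch (the vector values here are illustrative, not learned):

```python
# A toy word-embedding lookup: each word maps to a fixed-length vector.
# The values are made up for illustration, not trained weights.
word_embeddings = {
    'cat': [0.1, 0.3, 0.5],
    'dog': [0.2, 0.4, 0.6],
}

# All vectors share the same dimensionality, so they are comparable.
dims = {word: len(vec) for word, vec in word_embeddings.items()}
print(dims)  # {'cat': 3, 'dog': 3}
```

A vector like [1, 2, 3] has the right length but a very different scale, which is why "same length and scale" is the criterion to check.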
2. Fill in the blank (medium)

Complete the code to calculate cosine similarity between two embeddings.

Python
import numpy as np

def cosine_similarity(vec1, vec2):
    dot_product = np.dot(vec1, vec2)
    norm1 = np.linalg.norm(vec1)
    norm2 = np.linalg.norm(vec2)
    return dot_product / [1]
A. norm1 + norm2
B. norm1 - norm2
C. norm1 * norm2
D. norm1 / norm2
Common Mistakes
Using addition or subtraction instead of multiplication.
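Cosine similarity is the dot product divided by the product of the two vector norms. A runnable version of the completed function, checked on two easy cases (identical vectors give 1.0, orthogonal vectors give 0.0):

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    # Dot product measures how much the vectors point the same way.
    dot_product = np.dot(vec1, vec2)
    norm1 = np.linalg.norm(vec1)
    norm2 = np.linalg.norm(vec2)
    # Dividing by the PRODUCT of the norms scales the result to [-1, 1].
    return dot_product / (norm1 * norm2)

print(cosine_similarity([1, 0], [1, 0]))  # 1.0 (identical direction)
print(cosine_similarity([1, 0], [0, 1]))  # 0.0 (orthogonal)
```

Adding or subtracting the norms would not normalize the dot product, which is why only the product gives a bounded similarity score.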
3. Fill in the blank (hard)

Fix the error in the code that trains a simple embedding layer in PyTorch.

Python
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10, embedding_dim=3)
input_indices = torch.tensor([1, 2, 3])
output = embedding([1])
print(output)
A. tensor
B. input
C. indices
D. input_indices
Common Mistakes
Using undefined variable names.
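Conceptually, nn.Embedding is a lookup table: each integer index selects one row of a weight matrix. A pure-Python stand-in for that behavior (no PyTorch required; the table values are invented for illustration):

```python
# A dict-based sketch of what an embedding layer does: map integer
# indices to rows of a weight table. Values are illustrative only.
embedding_table = {
    0: [0.0, 0.1, 0.2],
    1: [0.3, 0.4, 0.5],
    2: [0.6, 0.7, 0.8],
    3: [0.9, 1.0, 1.1],
}

def embed(indices):
    # Look up one vector per index, like embedding(input_indices).
    return [embedding_table[i] for i in indices]

input_indices = [1, 2, 3]
output = embed(input_indices)
print(len(output), len(output[0]))  # 3 3 (three indices, 3-dim vectors)
```

The quiz's bug is exactly the "undefined variable" failure mode: the layer must be called with the variable that was actually defined.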
4. Fill in the blank (hard)

Fill both blanks to create a dictionary comprehension that maps each word to its length, keeping only words whose length is greater than 3.

Python
words = ['apple', 'cat', 'banana', 'dog']
lengths = {word: [1] for word in words if len(word) [2] 3}
A. len(word)
B. >
C. <
D. word
Common Mistakes
Using the word itself instead of its length.
Using '<' instead of '>'.
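The pattern being tested is a dict comprehension with a filter clause: {key: derived_value for item in iterable if condition}. A sketch of the same pattern on a different word list (hypothetical data, chosen to avoid repeating the exercise verbatim):

```python
# Dict comprehension with a filter: map each name to its length,
# but keep only names longer than 3 characters.
animals = ['elephant', 'ox', 'giraffe', 'bee']
name_lengths = {name: len(name) for name in animals if len(name) > 3}
print(name_lengths)  # {'elephant': 8, 'giraffe': 7}
```

Note that both common mistakes show up here: using the name itself as the value instead of len(name), or flipping the comparison so short names are kept instead of long ones.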
5. Fill in the blank (hard)

Fill all three blanks to create a dictionary of words and their embeddings filtered by similarity score greater than 0.5.

Python
similarities = {'apple': 0.7, 'cat': 0.4, 'banana': 0.8, 'dog': 0.3}
embeddings = {'apple': [0.1, 0.2], 'cat': [0.3, 0.4], 'banana': [0.5, 0.6], 'dog': [0.7, 0.8]}
filtered = {[1]: [2] for [3], score in similarities.items() if score > 0.5}
A. word
B. embeddings[word]
D. score
Common Mistakes
Using the score as the key or value instead of the word and its embedding.
Mismatching the loop variable names and the names used in the key/value expressions.
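This question combines the filtered comprehension from question 4 with a second lookup dictionary: iterate over the scores, but pull the value from the embeddings table. A sketch of the same pattern on hypothetical data:

```python
# Filter one dict (vectors) using scores held in another dict,
# keyed by the same words. Data here is invented for illustration.
scores = {'red': 0.9, 'green': 0.2, 'blue': 0.6}
vectors = {'red': [1.0, 0.0], 'green': [0.0, 1.0], 'blue': [0.5, 0.5]}

# Iterate the scores, but look up the VALUE in the other dict.
kept = {word: vectors[word] for word, score in scores.items() if score > 0.5}
print(kept)  # {'red': [1.0, 0.0], 'blue': [0.5, 0.5]}
```

The key and the lookup must use the same loop variable; if the key were score, or the loop variable names did not match the expressions, the comprehension would either crash or build the wrong mapping.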