Complete the code to create a simple word embedding using a dictionary.
word_embeddings = {'cat': [0.1, 0.3, 0.5], 'dog': [1]}
The embedding for 'dog' should be a vector with the same dimensionality and scale as 'cat', so that comparisons between the two vectors are meaningful.
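Following the hint, one possible completion gives 'dog' a three-dimensional vector on the same scale as 'cat' (the values below are illustrative, not trained):

```python
# Toy word embeddings: each word maps to a small dense vector.
# The numbers are made up for illustration; real embeddings are learned.
word_embeddings = {
    'cat': [0.1, 0.3, 0.5],
    'dog': [0.2, 0.4, 0.6],  # same dimensionality and scale as 'cat'
}

# Both vectors have the same length, so they can be compared directly.
print(len(word_embeddings['dog']) == len(word_embeddings['cat']))  # True
```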
Complete the code to calculate cosine similarity between two embeddings.
import numpy as np

def cosine_similarity(vec1, vec2):
    dot_product = np.dot(vec1, vec2)
    norm1 = np.linalg.norm(vec1)
    norm2 = np.linalg.norm(vec2)
    return dot_product / [1]
Cosine similarity divides the dot product by the product of the norms of the two vectors.
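Filling the blank with (norm1 * norm2) gives a complete function; a quick sanity check with parallel and orthogonal vectors:

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    # Cosine similarity = dot(v1, v2) / (||v1|| * ||v2||)
    dot_product = np.dot(vec1, vec2)
    norm1 = np.linalg.norm(vec1)
    norm2 = np.linalg.norm(vec2)
    return dot_product / (norm1 * norm2)

print(cosine_similarity([1, 0], [2, 0]))  # parallel vectors -> 1.0
print(cosine_similarity([1, 0], [0, 1]))  # orthogonal vectors -> 0.0
```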
Fix the error in the code that trains a simple embedding layer in PyTorch.
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10, embedding_dim=3)
input_indices = torch.tensor([1, 2, 3])
output = embedding([1])
print(output)
The embedding layer expects a tensor of indices as input, here named 'input_indices'.
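With the fix applied, the call passes the index tensor to the layer. The output has shape (3, 3): one 3-dimensional vector per input index (the layer's weights are randomly initialized, so the actual values vary between runs):

```python
import torch
import torch.nn as nn

# 10 possible tokens, each mapped to a 3-dimensional vector
embedding = nn.Embedding(num_embeddings=10, embedding_dim=3)
input_indices = torch.tensor([1, 2, 3])
output = embedding(input_indices)  # fixed: pass the index tensor
print(output.shape)  # torch.Size([3, 3])
```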
Fill both blanks to create a dictionary comprehension that maps words to their lengths, keeping only words whose length is greater than 3.
words = ['apple', 'cat', 'banana', 'dog']
lengths = {word: [1] for word in words if len(word) [2] 3}
The comprehension calculates the length of each word and filters words longer than 3 characters.
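A completed version, filling [1] with len(word) and [2] with >, as the hint describes:

```python
words = ['apple', 'cat', 'banana', 'dog']
# Keep only words longer than 3 characters, mapped to their lengths
lengths = {word: len(word) for word in words if len(word) > 3}
print(lengths)  # {'apple': 5, 'banana': 6}
```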
Fill all three blanks to create a dictionary of words and their embeddings filtered by similarity score greater than 0.5.
similarities = {'apple': 0.7, 'cat': 0.4, 'banana': 0.8, 'dog': 0.3}
embeddings = {'apple': [0.1, 0.2], 'cat': [0.3, 0.4], 'banana': [0.5, 0.6], 'dog': [0.7, 0.8]}
filtered = {[1]: [2] for [3], score in similarities.items() if score > 0.5}
The dictionary comprehension uses the word as the key and its embedding as the value, keeping only entries with a similarity score above 0.5.
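One way to fill the three blanks, per the hint: [1] is word (the key), [2] is embeddings[word] (the value), and [3] is word (the loop variable):

```python
similarities = {'apple': 0.7, 'cat': 0.4, 'banana': 0.8, 'dog': 0.3}
embeddings = {'apple': [0.1, 0.2], 'cat': [0.3, 0.4],
              'banana': [0.5, 0.6], 'dog': [0.7, 0.8]}

# Keep only words whose similarity score exceeds 0.5
filtered = {word: embeddings[word]
            for word, score in similarities.items() if score > 0.5}
print(filtered)  # {'apple': [0.1, 0.2], 'banana': [0.5, 0.6]}
```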