Complete the code to create an embedding vector for a word using a simple lookup.
embedding_vector = embedding_matrix[[1]]

We index the embedding matrix with the word's index to retrieve its vector representation.
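A minimal runnable sketch of the idea, using a made-up toy vocabulary (`word_index` and `embedding_matrix` are hypothetical names, not from the exercise):

```python
import numpy as np

# Hypothetical setup: a tiny vocabulary and a random embedding matrix.
word_index = {"cat": 0, "dog": 1, "fish": 2}
embedding_matrix = np.random.rand(3, 4)  # 3 words, 4-dimensional vectors

# Indexing with the word's row index returns that word's vector.
embedding_vector = embedding_matrix[word_index["cat"]]
print(embedding_vector.shape)  # (4,)
```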
Complete the code to calculate cosine similarity between two embedding vectors.
similarity = (vec1 @ vec2) / ([1](vec1) * np.linalg.norm(vec2))

Cosine similarity divides the dot product by the product of the vectors' norms.
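A worked sketch with two parallel vectors, whose cosine similarity should come out to exactly 1:

```python
import numpy as np

vec1 = np.array([1.0, 2.0, 3.0])
vec2 = np.array([2.0, 4.0, 6.0])  # vec2 = 2 * vec1, so the angle between them is 0

# Dot product divided by the product of the two norms.
similarity = (vec1 @ vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))
print(similarity)  # 1.0
```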
Fix the error in the code that normalizes an embedding vector.
normalized_vec = vec / np.linalg.[1](vec)

Normalization divides by the vector's norm to scale it to length 1.
Fill both blanks to create a dictionary of word embeddings for words longer than 3 letters.
word_embeddings = {word: [1] for word in words if len(word) [2] 3}

We get each embedding by indexing the matrix with the word's index, and the condition keeps only words longer than 3 letters.
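A filled-in sketch under the same assumptions as before (`word_index` and `embedding_matrix` are invented here for illustration):

```python
import numpy as np

words = ["cat", "tiger", "ox", "zebra"]
word_index = {w: i for i, w in enumerate(words)}
embedding_matrix = np.random.rand(len(words), 4)

# Build the dict, keeping only words longer than 3 letters.
word_embeddings = {word: embedding_matrix[word_index[word]]
                   for word in words if len(word) > 3}
print(sorted(word_embeddings))  # ['tiger', 'zebra']
```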
Fill all three blanks to create a filtered dictionary of embeddings where the vector norm is greater than 0.5.
filtered_embeddings = { [1]: [2] for [3] in word_embeddings if np.linalg.norm(word_embeddings[[1]]) > 0.5 }

We use 'word' as the key, 'word_embeddings[word]' as the value, and iterate over 'word' in the dictionary.
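A small sketch with hand-picked vectors so the filter's effect is easy to verify (the two example words and their vectors are made up):

```python
import numpy as np

word_embeddings = {
    "cat": np.array([0.1, 0.1]),  # norm is about 0.14, filtered out
    "dog": np.array([0.6, 0.8]),  # norm is exactly 1.0, kept
}

# Keep only entries whose embedding has norm greater than 0.5.
filtered_embeddings = {word: word_embeddings[word]
                       for word in word_embeddings
                       if np.linalg.norm(word_embeddings[word]) > 0.5}
print(list(filtered_embeddings))  # ['dog']
```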