Complete the code to load a pre-trained sentence transformer model.
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('[1]')
The correct answer is all-MiniLM-L6-v2, a popular pre-trained sentence transformer model. The other options are either not sentence transformer models or are unrelated.
Complete the code to encode a list of sentences into embeddings.
sentences = ['Hello world', 'Machine learning is fun']
embeddings = model.[1](sentences)
The encode method converts sentences into vector embeddings suitable for similarity tasks.
Fix the error in the code to compute cosine similarity between two sentence embeddings.
from sklearn.metrics.pairwise import [1]
similarity = cosine_similarity([emb1], [emb2])[0][0]
The function cosine_similarity computes the cosine similarity between vectors, which is the correct metric here.
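Importing cosine_similarity fixes the code. A self-contained sketch with small stand-in vectors (real sentence embeddings would come from model.encode; the values here are illustrative):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

emb1 = np.array([1.0, 0.0, 1.0])
emb2 = np.array([1.0, 1.0, 0.0])

# cosine_similarity expects 2-D inputs, so each vector is wrapped in a list;
# [0][0] extracts the single scalar from the resulting 1x1 matrix.
similarity = cosine_similarity([emb1], [emb2])[0][0]  # 0.5 for these vectors
```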
Fill both blanks to create a dictionary of sentence embeddings for given sentences.
sentences = ['AI is amazing', 'I love coding']
embeddings = model.encode(sentences)
embedding_dict = {sentences[[1]]: embeddings[[2]] for i in range(len(sentences))}
We use the loop variable i to index both sentences and embeddings to build the dictionary.
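With i in both blanks, the comprehension pairs each sentence with its embedding. A sketch using stand-in vectors in place of model output (the real embeddings would come from model.encode):

```python
sentences = ['AI is amazing', 'I love coding']
embeddings = [[0.1, 0.2], [0.3, 0.4]]  # stand-in embedding vectors

# Index both lists with the same loop variable i to align sentence and vector.
embedding_dict = {sentences[i]: embeddings[i] for i in range(len(sentences))}
```

An equivalent and often more idiomatic form is `dict(zip(sentences, embeddings))`, which avoids explicit indexing.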
Fill all three blanks to filter sentences with embeddings having norm greater than 1.0.
import numpy as np
filtered = {sent: emb for sent, emb in zip(sentences, embeddings) if np.linalg.[1](emb) [2] [3]}
The np.linalg.norm function computes the length of the embedding vector. We filter embeddings with norm greater than 1.0.
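With the blanks filled as norm, >, and 1.0, a runnable sketch using stand-in vectors with known norms (0.5 and 5.0, so only the second sentence survives the filter):

```python
import numpy as np

sentences = ['short vector', 'long vector']
embeddings = [np.array([0.3, 0.4]), np.array([3.0, 4.0])]  # norms: 0.5 and 5.0

# Keep only sentences whose embedding's Euclidean norm exceeds 1.0.
filtered = {sent: emb for sent, emb in zip(sentences, embeddings)
            if np.linalg.norm(emb) > 1.0}
```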