Complete the code to compute cosine similarity between two vectors.
from sklearn.metrics.pairwise import [1]

vector_a = [[1, 2, 3]]
vector_b = [[4, 5, 6]]
similarity = [1](vector_a, vector_b)
print(similarity)
The cosine_similarity function calculates the cosine similarity between vectors, which is commonly used in similarity search.
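With the blank filled in as cosine_similarity, as the explanation states, the completed snippet would run like this:

```python
from sklearn.metrics.pairwise import cosine_similarity

vector_a = [[1, 2, 3]]
vector_b = [[4, 5, 6]]

# cosine_similarity returns a 2D array of pairwise scores,
# here a 1x1 matrix comparing vector_a against vector_b
similarity = cosine_similarity(vector_a, vector_b)
print(similarity)
```

The score is close to 1 because the two vectors point in nearly the same direction.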
Complete the code to find the index of the most similar vector in a list using cosine similarity.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

vectors = np.array([[1, 0], [0, 1], [1, 1]])
query = np.array([[0.9, 0.1]])
similarities = cosine_similarity(query, vectors)
most_similar_index = np.argmax([1])
print(most_similar_index)
The similarities array contains similarity scores between the query and each vector. Using np.argmax on it finds the index of the highest similarity.
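Filling the blank with similarities, as described above, gives a working nearest-neighbor lookup:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

vectors = np.array([[1, 0], [0, 1], [1, 1]])
query = np.array([[0.9, 0.1]])

# similarities has shape (1, 3): one score per stored vector
similarities = cosine_similarity(query, vectors)

# np.argmax flattens the array and returns the position of the max score
most_similar_index = np.argmax(similarities)
print(most_similar_index)  # 0
```

The query [0.9, 0.1] is closest in direction to [1, 0], so index 0 is returned.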
Fix the error in the code to correctly compute Euclidean distances between vectors.
from sklearn.metrics.pairwise import [1]

vectors = [[1, 2], [3, 4], [5, 6]]
distances = [1](vectors)
print(distances)
The euclidean_distances function computes the matrix of Euclidean distances between all pairs of vectors. The original code presumably imported and called a different function, producing an error or an incorrect result.
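With euclidean_distances in both blanks, as the explanation indicates, the corrected code reads:

```python
from sklearn.metrics.pairwise import euclidean_distances

vectors = [[1, 2], [3, 4], [5, 6]]

# With a single argument, euclidean_distances returns the symmetric
# pairwise distance matrix, with zeros on the diagonal
distances = euclidean_distances(vectors)
print(distances)
```

For example, the distance between [1, 2] and [3, 4] is sqrt(2² + 2²) ≈ 2.83.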
Fill both blanks to create a dictionary of word lengths for words longer than 3 characters.
words = ['apple', 'bat', 'carrot', 'dog', 'elephant']
lengths = {word: [1] for word in words if len(word) [2] 3}
print(lengths)
The dictionary comprehension uses len(word) to get word lengths and filters words with length greater than 3 using >.
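Substituting len(word) for [1] and > for [2], per the explanation, the completed comprehension is:

```python
words = ['apple', 'bat', 'carrot', 'dog', 'elephant']

# keep only words longer than 3 characters, mapping each to its length
lengths = {word: len(word) for word in words if len(word) > 3}
print(lengths)  # {'apple': 5, 'carrot': 6, 'elephant': 8}
```

'bat' and 'dog' are filtered out because their length is exactly 3, not greater than 3.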
Fill all three blanks to create a filtered dictionary with uppercase keys and values greater than 2.
data = {'a': 1, 'b': 3, 'c': 5, 'd': 2}
filtered = {[1]: [2] for k, v in data.items() if v [3] 2}
print(filtered)

The dictionary comprehension converts keys to uppercase with k.upper(), keeps values as v, and filters for values greater than 2 using >.
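With k.upper(), v, and > in the three blanks, as the explanation describes, the completed code is:

```python
data = {'a': 1, 'b': 3, 'c': 5, 'd': 2}

# uppercase each key, keep its value, and drop entries not greater than 2
filtered = {k.upper(): v for k, v in data.items() if v > 2}
print(filtered)  # {'B': 3, 'C': 5}
```

'a' (value 1) and 'd' (value 2) fail the v > 2 test, so only 'B' and 'C' survive.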