Complete the code to load a pre-trained word embedding model using gensim.
from gensim.models import KeyedVectors
model = KeyedVectors.load_word2vec_format('[1]', binary=True)
The GoogleNews-vectors-negative300.bin file is a common pre-trained Word2Vec binary model used with gensim.
Complete the code to find the top 3 words most similar to 'king' using the model.
similar_words = model.most_similar('[1]', topn=3)
The most_similar method takes the target word as its first argument; here that word is 'king'.
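To see what most_similar does under the hood, here is a minimal sketch using a tiny hand-built vocabulary instead of a real gensim model: every other word is ranked by cosine similarity to the target. The vectors and words are invented for illustration only; with a real model you would simply call model.most_similar('king', topn=3).

```python
from math import sqrt

# Toy 3-dimensional embeddings standing in for a real gensim model
# (hypothetical vectors chosen for illustration only).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "man":   [0.9, 0.1, 0.2],
    "woman": [0.8, 0.2, 0.2],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(word, topn=3):
    # Rank every other vocabulary word by cosine similarity to the
    # target, which is essentially what KeyedVectors.most_similar does.
    scores = [(w, cosine(vectors[word], v))
              for w, v in vectors.items() if w != word]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:topn]

print(most_similar("king", topn=3))  # queen ranks first in this toy space
```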
Fix the error in the analogy code to find the word that fits: 'man' is to 'king' as 'woman' is to ____.
result = model.most_similar(positive=['king', '[1]'], negative=['man'], topn=1)
In the analogy, 'woman' is the positive word to add, while 'man' is subtracted.
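The analogy call can be sketched with the same kind of toy vectors: add the positive embeddings, subtract the negative ones, then rank the remaining vocabulary by cosine similarity to the result, excluding the input words as gensim does. All vectors here are invented for illustration; they are deliberately chosen so that king - man + woman lands exactly on queen.

```python
from math import sqrt

# Hypothetical toy embeddings; a real model would come from gensim.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "man":   [0.9, 0.1, 0.2],
    "woman": [0.8, 0.2, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def analogy(positive, negative, topn=1):
    # Sum the positive vectors, subtract the negative ones, then rank
    # the rest of the vocabulary by similarity to the combined vector.
    target = [0.0] * 3
    for w in positive:
        target = [t + c for t, c in zip(target, vectors[w])]
    for w in negative:
        target = [t - c for t, c in zip(target, vectors[w])]
    exclude = set(positive) | set(negative)  # input words are not returned
    scores = [(w, cosine(target, v))
              for w, v in vectors.items() if w not in exclude]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:topn]

print(analogy(positive=["king", "woman"], negative=["man"], topn=1))
```

With a real model, the equivalent call is model.most_similar(positive=['king', 'woman'], negative=['man'], topn=1).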
Fill both blanks to create a dictionary of words and their similarity scores to 'computer', filtering only words with similarity greater than 0.7.
similarity_dict = {word: [1] for word, score in model.most_similar('[2]', topn=10) if score > 0.7}
We want the similarity score as the value and 'computer' as the target word.
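The filled-in comprehension can be tried without a model by mocking the (word, score) pairs that most_similar returns; the words and scores below are invented for illustration.

```python
# Mock of what model.most_similar('computer', topn=10) might return:
# a list of (word, score) pairs (values invented for illustration).
mock_results = [
    ("computers", 0.92), ("laptop", 0.85), ("pc", 0.78),
    ("software", 0.74), ("keyboard", 0.69), ("printer", 0.61),
]

# Filled-in version of the exercise: the score becomes the value,
# and pairs with similarity <= 0.7 are filtered out.
similarity_dict = {word: score for word, score in mock_results if score > 0.7}
print(similarity_dict)
# → {'computers': 0.92, 'laptop': 0.85, 'pc': 0.78, 'software': 0.74}
```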
Fill all three blanks to create a list of words from the model's vocabulary that have length greater than 5 and contain the letter 'a'.
filtered_words = [[1] for [2] in model.index_to_key if [3]]
The list comprehension iterates over words, selects words with 'a' and length > 5, and collects the word itself.