Complete the code to import the library used for extractive summarization.
from sklearn.feature_extraction.text import [1]
The TfidfVectorizer converts text to a matrix of TF-IDF features, which is commonly used in extractive summarization.
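A minimal worked sketch of the idea, using a few made-up example sentences (the sentence list here is hypothetical, not from the exercise):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical sentences standing in for a tokenized document
sentences = [
    "The cat sat on the mat.",
    "Dogs bark loudly at night.",
    "The cat chased the dog.",
]

vectorizer = TfidfVectorizer()
# Each row of the resulting sparse matrix is one sentence's TF-IDF vector
sentence_vectors = vectorizer.fit_transform(sentences)
print(sentence_vectors.shape)  # (3, number_of_distinct_terms)
```

These per-sentence vectors are what later steps compare to score sentence importance.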
Complete the code to split the text into sentences for summarization.

import nltk
nltk.download('punkt')
sentences = nltk.tokenize.[1](text)
sent_tokenize splits text into sentences, which is essential for extractive summarization.
Fix the error in the code that computes the cosine similarity matrix for sentence vectors.
from sklearn.metrics.pairwise import [1]
similarity_matrix = cosine_similarity(sentence_vectors)
cosine_similarity computes similarity between vectors, which is used to find sentence similarity in extractive summarization.
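A minimal sketch with small hand-made vectors (the 2-D vectors here are illustrative only; in the exercise they would be TF-IDF rows):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Three toy sentence vectors: the first two are orthogonal,
# the third overlaps with both
vecs = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
])

similarity_matrix = cosine_similarity(vecs)
# The result is square (n_sentences x n_sentences) with 1.0 on the diagonal
print(similarity_matrix.shape)  # (3, 3)
```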
Fill both blanks to rank sentences using the PageRank algorithm.
import networkx as nx
sentence_graph = nx.[1](similarity_matrix)
scores = nx.[2](sentence_graph)
We create a graph from the similarity matrix using from_numpy_array and rank sentences with pagerank.
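A small end-to-end sketch of this step, using a hand-written symmetric similarity matrix in place of the real TF-IDF similarities:

```python
import numpy as np
import networkx as nx

# Hypothetical 3x3 sentence-similarity matrix (symmetric, zero diagonal)
similarity_matrix = np.array([
    [0.0, 0.5, 0.2],
    [0.5, 0.0, 0.8],
    [0.2, 0.8, 0.0],
])

# Edge weights come from the matrix entries
sentence_graph = nx.from_numpy_array(similarity_matrix)
# pagerank returns a dict mapping sentence index -> importance score
scores = nx.pagerank(sentence_graph)
print(scores)
```

Sentences that are similar to many other sentences accumulate higher PageRank scores, which is the ranking signal TextRank-style summarizers use.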
Fill all three blanks to select the top sentences and join them into a summary.
top_sentences = sorted(((scores[i], s) for i, s in enumerate(sentences)), reverse=True)[:[1]]
summary = ' '.join([[2] for _, [3] in top_sentences])
We select the top 3 sentences, then join the sentence strings (the variable s) to form the summary.
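The selection step can be sketched with hypothetical scores and sentences (both made up here for illustration):

```python
# Hypothetical inputs: one score per sentence index
sentences = ["Sentence A.", "Sentence B.", "Sentence C.", "Sentence D."]
scores = {0: 0.10, 1: 0.40, 2: 0.30, 3: 0.20}

# Pair each score with its sentence, sort descending by score, keep the top 3
top_sentences = sorted(((scores[i], s) for i, s in enumerate(sentences)),
                       reverse=True)[:3]
# Discard the scores and join the sentence strings
summary = ' '.join(s for _, s in top_sentences)
print(summary)  # "Sentence B. Sentence C. Sentence D."
```

Note that sorting on (score, sentence) tuples orders primarily by score; the top-3 summary keeps the ranked order rather than the original document order, which some summarizers re-sort afterward.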