Complete the code to tokenize a sentence using NLTK.
import nltk
nltk.download('punkt')
sentence = "Hello world!"
tokens = nltk.word_tokenize([1])
print(tokens)
The word_tokenize function takes a string and splits it into word tokens. Here, we pass the variable sentence as blank [1].
Complete the code to load the English model in spaCy.
import spacy
nlp = spacy.load([1])
doc = nlp("This is a test.")
print([token.text for token in doc])
The small English spaCy model is loaded by its package name, "en_core_web_sm", which fills blank [1].
Fix the error in the Hugging Face pipeline code to perform sentiment analysis.
from transformers import pipeline
sentiment = pipeline([1])
result = sentiment("I love learning NLP!")
print(result)
The pipeline task name is a hyphenated string, so "sentiment-analysis" is the correct value for blank [1].
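Filled in, the snippet could look like this sketch. With no explicit model argument, pipeline() downloads a default sentiment model on first use, so a network connection is assumed:

```python
from transformers import pipeline

# blank [1]: the hyphenated task name "sentiment-analysis"
sentiment = pipeline("sentiment-analysis")
result = sentiment("I love learning NLP!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The result is a list with one dict per input string, each containing a label and a confidence score.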
Fill both blanks to create a dictionary comprehension that maps words to their lengths for words longer than 3 characters.
words = ["apple", "is", "good", "for", "you"]
lengths = {word: [1] for word in words if len(word) [2] 3}
print(lengths)
The dictionary maps each word to its length using len(word). The condition filters words with length greater than 3 using >.
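With both blanks filled, the comprehension reads:

```python
words = ["apple", "is", "good", "for", "you"]
# blank [1]: len(word); blank [2]: the > operator
lengths = {word: len(word) for word in words if len(word) > 3}
print(lengths)  # {'apple': 5, 'good': 4}
```

Note that "for" and "you" (length 3) are excluded because the condition is strictly greater than 3.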
Fill all three blanks to create a dictionary comprehension that maps uppercase words to their lengths for words longer than 3 characters.
words = ["apple", "is", "good", "for", "you"]
result = {[1]: [2] for word in words if len(word) [3] 3}
print(result)
The dictionary keys are uppercase words using word.upper(). Values are lengths with len(word). The condition filters words longer than 3 using >.
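With all three blanks filled, the completed comprehension is:

```python
words = ["apple", "is", "good", "for", "you"]
# blank [1]: word.upper(); blank [2]: len(word); blank [3]: >
result = {word.upper(): len(word) for word in words if len(word) > 3}
print(result)  # {'APPLE': 5, 'GOOD': 4}
```

This differs from the previous exercise only in applying .upper() to each key, so the same two words pass the length filter.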