Complete the code to import the main NLP library used for processing human language.
import [1]
The nltk (Natural Language Toolkit) library is a popular Python toolkit for working with human language data, which makes it a standard choice for NLP tasks such as tokenization and tagging.
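As a check on the explanation above, the completed answer can be sketched as follows (this assumes nltk is installed in the environment, e.g. via pip install nltk):

```python
# Completed answer: the blank [1] is filled with nltk.
import nltk

# The import alone is the answer; printing the version simply
# confirms the toolkit is available in this environment.
print(nltk.__version__)
```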
Complete the code to tokenize a sentence into words using NLTK.
from nltk.tokenize import word_tokenize
sentence = 'Hello world!'
tokens = [1](sentence)
The function word_tokenize splits a sentence into individual words or tokens, which is a basic step in NLP.
Fix the error in the code to convert all tokens to lowercase.
tokens = ['Hello', 'World']
lower_tokens = [token.[1]() for token in tokens]
The lower() method converts strings to lowercase, which helps normalize text for NLP tasks.
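The fixed code, with the blank filled by lower as the explanation states:

```python
# Completed fix: the blank [1] is lower.
tokens = ['Hello', 'World']
lower_tokens = [token.lower() for token in tokens]
print(lower_tokens)  # ['hello', 'world']
```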
Fill both blanks to create a dictionary of word lengths for words longer than 3 letters.
words = ['chat', 'ai', 'language', 'nlp']
lengths = {word: [1] for word in words if len(word) [2] 3}
The dictionary comprehension maps each word to its length with len(word) in blank [1], and the filter len(word) > 3 (blank [2] is >) keeps only words longer than three letters.
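Filling both blanks per the explanation above gives the following sketch; only 'chat' and 'language' pass the length filter, since 'ai' and 'nlp' are not longer than three letters:

```python
# Completed comprehension: blank [1] is len(word), blank [2] is >.
words = ['chat', 'ai', 'language', 'nlp']
lengths = {word: len(word) for word in words if len(word) > 3}
print(lengths)  # {'chat': 4, 'language': 8}
```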
Fill all three blanks to create a dictionary of uppercase words and their lengths for words longer than 2 letters.
words = ['data', 'ai', 'ml', 'python']
result = {[1]: [2] for word in words if len(word) [3] 2}
The dictionary comprehension uses word.upper() as keys, len(word) as values, and filters words with length greater than 2 using >.
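Filling all three blanks per the explanation above gives this sketch; 'ai' and 'ml' are exactly two letters long, so the > filter drops them:

```python
# Completed comprehension: [1] is word.upper(), [2] is len(word),
# and [3] is the > comparison.
words = ['data', 'ai', 'ml', 'python']
result = {word.upper(): len(word) for word in words if len(word) > 2}
print(result)  # {'DATA': 4, 'PYTHON': 6}
```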