Complete the code to tokenize the sentence into words using NLTK.
from nltk.tokenize import word_tokenize

sentence = "I love learning AI!"
tokens = [1](sentence)
print(tokens)
The word_tokenize function splits a sentence into individual words (tokens).
Complete the code to stem the word using PorterStemmer.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
word = "running"
stemmed_word = stemmer.[1](word)
print(stemmed_word)
The stem method reduces a word to its root form by chopping off suffixes.
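A completed version of the stemming exercise, with the blank filled by the stem method named in the explanation. PorterStemmer needs no downloaded corpora, so this runs as-is:

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
word = "running"
# Porter stemming strips the "-ing" suffix, leaving the root "run".
stemmed_word = stemmer.stem(word)
print(stemmed_word)  # run
```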
Complete the code to lemmatize the word using WordNetLemmatizer.
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
word = "better"
lemma = lemmatizer.[1](word, pos='a')
print(lemma)
The lemmatize method returns the base form of a word considering its part of speech.
Fill both blanks to create a dictionary of word stems for words longer than 4 characters.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ['running', 'jumps', 'easily', 'fairly']
stem_dict = {word: [1] for word in words if len(word) [2] 4}
print(stem_dict)
The dictionary comprehension applies stemmer.stem(word) to words with length greater than 4.
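A completed version of the comprehension exercise: blank [1] is stemmer.stem(word) and blank [2] is the > operator, both as the explanation states.

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ['running', 'jumps', 'easily', 'fairly']
# Keep only words longer than 4 characters, mapping each to its stem.
stem_dict = {word: stemmer.stem(word) for word in words if len(word) > 4}
print(stem_dict)
```

All four words pass the length filter here ('jumps' has exactly 5 characters, which is greater than 4). Note that Porter stems are not always dictionary words: 'easily' stems to 'easili'.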
Fill both blanks to create a dictionary of lemmas for words longer than 5 characters.
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
words = ['running', 'jumps', 'easily', 'fairly']
lemma_dict = {word: lemmatizer.[1](word, pos='r') for word in words if len(word) [2] 5} print(lemma_dict)
The dictionary comprehension maps each word to its lemma using the lemmatize method (with pos='r' for adverbs), keeping only words longer than 5 characters.