Natural Language Processing (NLP) allows computers to interpret and generate human language. Which of the following best explains how NLP bridges the gap between humans and computers?
Think about how computers need structured data but humans speak naturally.
NLP converts natural human language into structured data that computers can analyze and respond to, enabling meaningful interaction.
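To make "converting language into structured data" concrete, here is a minimal sketch (not from the question itself) that turns free-form text into a structured token-count mapping using only the standard library:

```python
import re
from collections import Counter

def to_structured(text):
    # Lowercase the text, extract word tokens, and count them:
    # unstructured natural language becomes structured data
    # (a mapping of word -> frequency) a program can analyze.
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

counts = to_structured("Computers need structure; humans speak naturally.")
print(counts["humans"])  # 1
```

Real NLP pipelines build far richer structures (parse trees, embeddings), but the principle is the same: text in, machine-readable structure out.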
Given the Python code below using the nltk library, what is the output?
import nltk
nltk.download('punkt', quiet=True)
from nltk.tokenize import word_tokenize

text = "Hello, world! NLP bridges humans and computers."
tokens = word_tokenize(text)
print(tokens)
Tokenization splits text into words and punctuation separately.
The word_tokenize function splits the sentence into words and punctuation marks as separate tokens, so the output is ['Hello', ',', 'world', '!', 'NLP', 'bridges', 'humans', 'and', 'computers', '.'].
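A rough approximation of this behavior, without installing nltk, is a regular expression that matches either a run of word characters or a single punctuation character (a simplification: word_tokenize also applies language-specific rules this sketch omits):

```python
import re

def simple_tokenize(text):
    # \w+ grabs each word; [^\w\s] grabs each punctuation mark
    # as its own token, mimicking word_tokenize on simple input.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = simple_tokenize("Hello, world! NLP bridges humans and computers.")
print(tokens)
# ['Hello', ',', 'world', '!', 'NLP', 'bridges', 'humans', 'and', 'computers', '.']
```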
You want a model that understands the context and meaning of sentences for tasks like question answering. Which model type is most suitable?
Consider models that handle word order and context.
RNNs process words sequentially, carrying a hidden state that accumulates context from earlier words, which makes them suitable for modeling sentence meaning.
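The core recurrence can be illustrated with a toy scalar example (real RNNs use vectors and weight matrices; the numbers here are arbitrary assumptions):

```python
import math

def rnn_step(h, x, w_h, w_x, b):
    # One recurrent update: the new hidden state mixes the
    # previous state (context so far) with the current input.
    return math.tanh(w_h * h + w_x * x + b)

# Process a toy "sentence" of scalar word encodings in order;
# the hidden state carries context from earlier words forward.
h = 0.0
for x in [0.5, -0.2, 0.9]:
    h = rnn_step(h, x, w_h=0.8, w_x=1.0, b=0.0)
print(h)  # final hidden state summarizes the whole sequence
```

Because each step reuses the previous hidden state, the final value depends on every word and on their order, which is exactly the property the answer relies on.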
You have trained a language model to predict the next word in a sentence. Which metric best evaluates its performance?
Think about a metric that measures uncertainty in language models.
Perplexity measures how well a language model predicts a sequence; lower perplexity means better predictions.
Consider the Python code below using TextBlob for sentiment analysis. Why does it raise an error?
from textblob import TextBlob

text = None
blob = TextBlob(text)
sentiment = blob.sentiment.polarity
print(sentiment)
Check the type of the input variable text.
TextBlob expects a string input. Passing None raises a TypeError because the TextBlob constructor cannot process non-string types.
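A common fix is a guard clause that validates the input before calling the library. The sketch below uses a stand-in analyzer function (hypothetical, so it runs without TextBlob installed); with TextBlob you would pass a function wrapping TextBlob(text).sentiment.polarity instead:

```python
def analyze_sentiment(text, analyzer):
    # Reject non-string input up front with a clear message,
    # instead of letting the library fail deeper inside.
    if not isinstance(text, str):
        raise TypeError(f"expected a string, got {type(text).__name__}")
    return analyzer(text)

# Stand-in analyzer so this sketch is self-contained.
toy = lambda t: 0.0

try:
    analyze_sentiment(None, toy)
except TypeError as e:
    print("caught:", e)
```

Validating early like this turns an obscure library-internal error into an explicit, easy-to-debug one at the call site.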