Which of the following best describes syntactic ambiguity in natural language processing?
Think about how sentence structure can change meaning.
Syntactic ambiguity arises when a sentence's structure permits more than one parse. In 'I saw the man with the telescope', the phrase 'with the telescope' can modify the verb (the seeing was done through a telescope) or the noun (the man was carrying a telescope).
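The two readings can be made concrete by parsing the sentence with a small context-free grammar; a chart parser returns one tree per structural interpretation. This is a minimal sketch assuming the NLTK library is installed; the toy grammar below is illustrative, not a grammar from the source.

```python
import nltk

# Toy grammar in which a PP can attach either to the VP or to an NP.
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Pro | Det N | NP PP
VP -> V NP | VP PP
PP -> P NP
Pro -> 'I'
Det -> 'the'
N  -> 'man' | 'telescope'
V  -> 'saw'
P  -> 'with'
""")

parser = nltk.ChartParser(grammar)
trees = list(parser.parse("I saw the man with the telescope".split()))
for tree in trees:
    print(tree)  # one tree per reading: verb attachment and noun attachment
```

The parser yields exactly two trees, one for each attachment of the prepositional phrase, which is the ambiguity the question describes.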
You want to build a system that identifies names of people, places, and organizations in text. Which model type is most suitable?
Think about models good at understanding sequences of words.
RNNs and Transformers are designed to process sequences of tokens, which makes them well suited to sequence-labeling tasks such as named entity recognition.
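NER is usually framed as sequence labeling: the RNN or Transformer emits one BIO tag per token, and entity spans are then decoded from the tag sequence. As a hedged sketch (the tagger itself is assumed; only the span-decoding step is shown), BIO decoding might look like this:

```python
def extract_entities(tokens, tags):
    """Decode (label, text) spans from per-token BIO tags."""
    entities, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):           # a new entity begins
            if current:
                entities.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)         # continue the current entity
        else:                              # "O" tag or inconsistent "I-" tag
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(label, " ".join(words)) for label, words in entities]

tokens = ["Barack", "Obama", "visited", "Paris"]
tags = ["B-PER", "I-PER", "O", "B-LOC"]
spans = extract_entities(tokens, tags)
print(spans)  # [('PER', 'Barack Obama'), ('LOC', 'Paris')]
```

A trained sequence model would produce the `tags` list; the example hard-codes one for illustration.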
Which metric is most appropriate to evaluate a language model's ability to predict the next word in a sentence?
Consider a metric that measures uncertainty in predictions.
Perplexity is the exponentiated average negative log-probability the model assigns to the actual next words; lower perplexity means the model was less 'surprised' and predicts better.
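The definition translates directly into code. This minimal sketch assumes you already have the probability the model assigned to each actual next token:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# If the model spreads probability uniformly over 4 choices at each step,
# perplexity is 4: it is as "surprised" as a 4-way guess.
print(perplexity([0.25, 0.25, 0.25]))  # 4.0
```

A model that assigns probability 1.0 to every true token would reach the minimum perplexity of 1.0.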
A neural machine translation model produces fluent but incorrect translations. Which issue is most likely causing this?
Think about what affects the model's knowledge of language pairs.
Limited or insufficiently diverse parallel training data is the most likely cause: the decoder can still generate fluent target-language text, but without broad coverage of the language pair the translations come out inaccurate.
When training a Transformer model for language tasks, which hyperparameter adjustment is most effective to reduce overfitting?
Think about techniques that prevent the model from memorizing training data.
Increasing the dropout rate helps prevent overfitting: randomly zeroing activations during training keeps the model from relying on specific neurons and from memorizing the training data.
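The mechanism can be sketched without a deep-learning framework. This is a minimal, framework-free illustration of inverted dropout (the variant most frameworks use), not the author's implementation: each activation is zeroed with probability `rate` during training, and survivors are scaled by 1/(1 - rate) so the expected activation is unchanged; at inference time the layer is a no-op.

```python
import random

def dropout(values, rate, training=True, rng=random):
    """Inverted dropout over a list of activations."""
    if not training or rate == 0.0:
        return list(values)        # inference: pass activations through
    keep = 1.0 - rate
    # Drop each activation with probability `rate`;
    # scale survivors by 1/keep so the expected value is preserved.
    return [v / keep if rng.random() < keep else 0.0 for v in values]

activations = [1.0, 2.0, 3.0, 4.0]
out = dropout(activations, rate=0.5, rng=random.Random(0))
print(out)  # each entry is either 0.0 or the original value doubled
```

Because a different random subset of neurons is silenced on every training step, no single neuron can be depended on, which is the regularizing effect the answer describes.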