Few-shot learning with prompts allows a model to perform new tasks with very few examples. What is the primary advantage of this approach compared to traditional supervised learning?
Think about how few-shot learning reduces the need for large datasets.
Few-shot learning with prompts helps models adapt to new tasks using only a few examples, reducing the need for large labeled datasets.
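As a minimal sketch of the idea (the task, example texts, and helper name here are hypothetical), few-shot prompting packs a handful of labeled examples directly into the prompt instead of fine-tuning on a large labeled dataset:

```python
# Build a few-shot sentiment prompt from a handful of labeled examples.
# No gradient updates are needed; the "training data" lives entirely in the prompt.
examples = [
    ("The movie was wonderful.", "positive"),
    ("I hated the ending.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Format labeled examples plus a new query as a single prompt string."""
    lines = ["Classify the sentiment as positive or negative."]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    # The prompt ends mid-pattern so the model completes the final label.
    lines.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "What a delightful surprise!")
print(prompt)
```

The resulting string would be sent to a pretrained language model, which completes the final `Sentiment:` line by following the pattern of the examples.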
Given the following prompt to a language model, what is the expected output?
prompt = '''Translate English to French:
English: "I love apples."
French: "J'aime les pommes."
English: "She is happy."
French: '''
# The model is expected to generate the French translation of "She is happy."
Consider the correct French translation for "She is happy."
The correct French translation of "She is happy." is "Elle est heureuse." — the model continues the pattern established by the in-prompt example.
You want to perform few-shot learning on a text classification task using prompts. Which model type is best suited for this?
Few-shot learning with prompts works best with large pretrained language models.
Large pretrained language models like GPT-3 have learned broad language patterns and can adapt to new tasks with few examples using prompts.
When using few-shot learning with prompts, which hyperparameter most directly affects how many examples are shown in the prompt?
Think about the term that describes how many examples you provide in the prompt.
The number of shots refers to how many example input-output pairs are included in the prompt to guide the model.
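A small sketch of how the shot count works in practice (the pool of examples and function name are hypothetical): the number of shots is simply how many example pairs you slice from your pool into the prompt.

```python
def make_k_shot_prompt(pool, k, query):
    """Take the first k example pairs from a pool and format a k-shot prompt."""
    shots = pool[:k]  # k is the number of "shots" shown to the model
    parts = [f"Input: {x}\nOutput: {y}" for x, y in shots]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

pool = [("2+2", "4"), ("3+5", "8"), ("7+1", "8")]
# A 2-shot prompt: exactly two worked examples precede the query.
print(make_k_shot_prompt(pool, 2, "4+4"))
```

Setting `k = 0` gives a zero-shot prompt; increasing `k` generally improves task performance up to the model's context-length limit.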
You have a few-shot prompt-based model for sentiment classification. Which metric is most appropriate to evaluate its performance on a balanced test set?
Consider the task is classification and the test set is balanced.
Accuracy measures the proportion of correct predictions and is suitable for balanced classification tasks.
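A quick sketch of the metric (the prediction and label lists are made up for illustration): accuracy is just the fraction of predictions that match the gold labels.

```python
def accuracy(preds, labels):
    """Return the fraction of predictions matching the gold labels."""
    assert len(preds) == len(labels), "prediction/label lengths must match"
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

preds = ["pos", "neg", "pos", "neg"]
labels = ["pos", "neg", "neg", "neg"]
print(accuracy(preds, labels))  # 0.75
```

On a balanced test set this is a fair summary; on a heavily imbalanced set, metrics such as F1 would be more informative.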