Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to import the library used for natural language processing.
Prompt Engineering / GenAI
import [1]
💡 Hint: Common Mistakes
- Importing unrelated libraries such as numpy or matplotlib.
- Using tensorflow, which is a general deep-learning framework rather than a library of pre-trained language models.
Explanation: The 'transformers' library is commonly used for working with language models, which are essential for hallucination detection.
2. Fill in the blank (medium)
Complete the code to load a pre-trained language model for hallucination detection.
model = transformers.AutoModelForSequenceClassification.from_pretrained('[1]')
💡 Hint: Common Mistakes
- Choosing image models such as resnet50 or vgg16.
- Choosing GPT-2, which is a generative model rather than a classifier.
Explanation: The 'bert-base-uncased' model is a common choice for sequence classification tasks like hallucination detection.
3. Fill in the blank (hard)
Fix the error in the code so the input text is tokenized correctly for the model.
inputs = tokenizer('[1]', return_tensors='pt')
💡 Hint: Common Mistakes
- Passing a list of words instead of a string.
- Passing a list containing the string.
Explanation: The tokenizer expects a string input here, so the text should be passed as a plain string, not a list or pre-split tokens.
4. Fill in the blank (hard)
Fill both blanks to compute the model's prediction and extract the predicted label.
outputs = model(**[1])
prediction = outputs.logits.[2](dim=1).argmax()
💡 Hint: Common Mistakes
- Passing raw input_ids instead of the full inputs dictionary.
- Using sigmoid instead of softmax for multi-class classification.
Explanation: The model expects the tokenized inputs as keyword arguments; softmax converts the logits to probabilities, and argmax then selects the predicted label.
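The softmax-then-argmax step can be illustrated without a real model. This is a minimal sketch in plain Python, and the logit values are made up purely for illustration:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize the exponentials.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a two-class (not-hallucinated / hallucinated) head.
logits = [0.3, 1.7]
probs = softmax(logits)

# argmax over the probabilities picks the predicted label.
predicted_label = max(range(len(probs)), key=lambda i: probs[i])
print(predicted_label)  # class 1 has the larger logit, so argmax is 1
```

Note that softmax is monotonic, so argmax over the probabilities always matches argmax over the raw logits; applying softmax first is only needed when you want calibrated scores, as in Task 5.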
5. Fill in the blank (hard)
Fill all three blanks to create a function that detects hallucination by thresholding the prediction score.
def detect_hallucination(text):
    inputs = tokenizer(text, return_tensors='pt')
    outputs = model(**[1])
    probs = torch.nn.functional.[2](outputs.logits, dim=1)
    score = probs[0][[3]].item()
    return score > 0.5
💡 Hint: Common Mistakes
- Using sigmoid instead of softmax for multi-class outputs.
- Indexing the wrong class probability.
- Passing raw text instead of tokenized inputs to the model.
Explanation: The function passes the tokenized inputs to the model, applies softmax to obtain probabilities, and compares the probability of class 1 against the 0.5 threshold to decide whether a hallucination is detected.