Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (Easy)
Complete the code to load a pretrained model for factual consistency checking.
Prompt Engineering / GenAI

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained([1])
Common Mistakes
Choosing a language model not fine-tuned for classification.
Using a base model without a classification head.
Explanation: The 'facebook/bart-large-mnli' model is commonly used for factual consistency checking tasks.
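A minimal sketch of the completed step, assuming the intended checkpoint is 'facebook/bart-large-mnli' as the explanation suggests (the model download is kept behind the main guard so the label mapping can be inspected without network access):

```python
# Sketch: load an MNLI-tuned model for factual consistency checking.
# Assumes the transformers library is installed.
MODEL_NAME = "facebook/bart-large-mnli"

# Label order from this checkpoint's config:
ID2LABEL = {0: "contradiction", 1: "neutral", 2: "entailment"}

if __name__ == "__main__":
    from transformers import AutoModelForSequenceClassification

    # The MNLI fine-tune ships with a 3-way classification head, so
    # AutoModelForSequenceClassification can load it directly -- this is
    # why a base (headless) language model would be the wrong choice here.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    print(model.config.num_labels)  # 3
```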
2. Fill in the blank (Medium)
Complete the code to tokenize input text for factual consistency checking.
Prompt Engineering / GenAI

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
inputs = tokenizer([1], return_tensors="pt", truncation=True)
Common Mistakes
Tokenizing only the claim or only the document.
Passing a single string instead of a list.
Explanation: For factual consistency, we tokenize both the claim and the document as a list of strings.
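As an aside, the usual transformers idiom for NLI models is to encode the document (premise) and claim (hypothesis) as a single sequence pair rather than two independent strings; a sketch under that assumption (`build_pair` is a hypothetical helper for this illustration, and the tokenizer work stays behind the main guard):

```python
# Sketch: building NLI inputs for consistency checking.
def build_pair(document, claim):
    # Premise first, hypothesis second -- the order MNLI models expect.
    return document, claim

if __name__ == "__main__":
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
    premise, hypothesis = build_pair(
        "The Eiffel Tower is in Paris.", "The tower is located in Paris."
    )
    # Sequence-pair encoding: both texts land in one input with separator
    # tokens, truncated to the model's maximum length.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    print(inputs["input_ids"].shape)
```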
3. Fill in the blank (Hard)
Fix the error in the code to get model predictions for factual consistency.
Prompt Engineering / GenAI

outputs = model(**[1])
predictions = outputs.logits.argmax(dim=1)
Common Mistakes
Passing the tokenizer object instead of tokenized inputs.
Passing the model or outputs variable.
Explanation: The model expects tokenized inputs, so we pass 'inputs' unpacked as keyword arguments.
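The prediction step reduces to an argmax over the class logits; a minimal sketch of just that step (`argmax_index` is a hypothetical pure-Python helper mirroring what `tensor.argmax(dim=1)` computes for each row of the (batch, num_classes) logits):

```python
def argmax_index(logits_row):
    # Index of the largest logit in one row -- the predicted class id.
    return max(range(len(logits_row)), key=lambda i: logits_row[i])

if __name__ == "__main__":
    # With entailment (index 2) scoring highest, the prediction is 2.
    print(argmax_index([-1.2, 0.3, 2.7]))  # 2
```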
4. Fill in the blank (Hard)
Fill both blanks to create a function that checks if a claim is factually consistent with a document.
Prompt Engineering / GenAI

def check_consistency(claim, document):
    inputs = tokenizer([claim, document], return_tensors=[1], truncation=True)
    outputs = model(**inputs)
    pred = outputs.logits.argmax(dim=[2]).item()
    return pred == 2
Common Mistakes
Using TensorFlow tensors when model is PyTorch.
Argmax over wrong dimension.
Explanation: We use 'pt' for PyTorch tensors and take the argmax over dimension 1, the class dimension (dimension 0 is the batch dimension).
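Putting the pieces together, a self-contained sketch of the checker (the pair-style tokenizer call and the entailment index 2 are assumptions based on facebook/bart-large-mnli's label order; the model-dependent part stays behind the main guard):

```python
ENTAILMENT = 2  # bart-large-mnli: 0 contradiction, 1 neutral, 2 entailment

def is_consistent(pred_id):
    # A claim counts as consistent when the model predicts entailment.
    return pred_id == ENTAILMENT

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "facebook/bart-large-mnli"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    def check_consistency(claim, document):
        # Document as premise, claim as hypothesis, encoded as one pair.
        inputs = tokenizer(document, claim, return_tensors="pt", truncation=True)
        with torch.no_grad():
            outputs = model(**inputs)
        pred = outputs.logits.argmax(dim=1).item()
        return is_consistent(pred)

    print(check_consistency("Paris is in France.", "France's capital is Paris."))
```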
5. Fill in the blank (Hard)
Fill all three blanks to compute accuracy of factual consistency predictions.
Prompt Engineering / GenAI

correct = 0
for claim, doc, label in data:
    inputs = tokenizer([claim, doc], return_tensors=[1], truncation=True)
    outputs = model(**inputs)
    pred = outputs.logits.argmax(dim=[2]).item()
    if pred == label:
        correct += [3]
accuracy = correct / len(data)
Common Mistakes
Using wrong tensor type.
Argmax over wrong dimension.
Incrementing correct by 0 or wrong value.
Explanation: Use PyTorch tensors ('pt'), take the argmax over dimension 1, and increment correct by 1 for each correct prediction.
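The accuracy computation itself is model-independent; a sketch with a pure helper (`accuracy_of` is hypothetical, and the example prediction/label lists stand in for the per-example loop over `data`):

```python
def accuracy_of(preds, labels):
    # Fraction of predictions that match the gold labels.
    correct = sum(1 for p, y in zip(preds, labels) if p == y)
    return correct / len(labels)

if __name__ == "__main__":
    # e.g., class ids produced by argmax over the model's logits
    preds = [2, 0, 2, 1]
    labels = [2, 0, 1, 1]
    print(accuracy_of(preds, labels))  # 0.75
```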