Prompt Engineering / GenAI · ~10 mins

Why advanced RAG improves answer quality in Prompt Engineering / GenAI - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to import the Retriever class used in RAG models.

from transformers import [1]
A. Retriever
B. RagRetriever
C. Tokenizer
D. RagModel
Common Mistakes
Choosing RagModel instead of RagRetriever
Using Tokenizer which is unrelated to retrieval
Using a generic Retriever class not specific to RAG
Task 2: Fill in the blank (medium)

Complete the code to initialize a RAG model with a retriever.

from transformers import RagTokenizer, RagSequenceForGeneration, RagRetriever

tokenizer = RagTokenizer.from_pretrained('facebook/rag-sequence-nq')
retriever = RagRetriever.from_pretrained('facebook/rag-sequence-nq')
model = RagSequenceForGeneration.from_pretrained('facebook/rag-sequence-nq', retriever=[1])
A. retriever
B. model
C. tokenizer
D. None
Common Mistakes
Passing the tokenizer instead of the retriever
Passing None, which disables retrieval
Passing the model itself, which raises an error
Task 3: Fill in the blank (hard)

Fix the error in this code that generates an answer using the RAG model.

input_dict = tokenizer(question, return_tensors='pt')
outputs = model.generate(input_ids=[1], num_beams=5, max_length=50)
answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
A. input_dict['labels']
B. input_dict['attention_mask']
C. input_dict['token_type_ids']
D. input_dict['input_ids']
Common Mistakes
Using attention_mask instead of input_ids
Using token_type_ids, which is optional and not needed here
Using labels, which is only used during training
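Tasks 1 through 3 together form one retrieve-then-generate pipeline. The sketch below assembles them into a single function, assuming the facebook/rag-sequence-nq checkpoint from Hugging Face transformers; loading the pretrained weights and retrieval index downloads several gigabytes, so the imports and model setup are deferred until the function is actually called.

```python
def answer_question(question: str) -> str:
    """Retrieve supporting documents and generate an answer with RAG.

    Combines the three quiz tasks: import RagRetriever, wire the
    retriever into the model, and call generate() with input_ids.
    """
    # Deferred imports: loading these pulls in the full model weights
    # and retrieval index when the pretrained classes are instantiated.
    from transformers import (
        RagTokenizer,
        RagRetriever,
        RagSequenceForGeneration,
    )

    tokenizer = RagTokenizer.from_pretrained('facebook/rag-sequence-nq')
    retriever = RagRetriever.from_pretrained('facebook/rag-sequence-nq')
    model = RagSequenceForGeneration.from_pretrained(
        'facebook/rag-sequence-nq', retriever=retriever
    )

    # Tokenize the question; generate() takes input_ids, not
    # attention_mask or labels (the task 3 pitfalls).
    input_dict = tokenizer(question, return_tensors='pt')
    outputs = model.generate(
        input_ids=input_dict['input_ids'], num_beams=5, max_length=50
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```

Calling `answer_question("who holds the record in 100m dash")` would retrieve passages and beam-search an answer, provided the checkpoint and index are available locally.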
Task 4: Fill in the blank (hard)

Fill both blanks to create a dictionary comprehension that filters retrieved documents with score above 0.5.

filtered_docs = {doc: score for doc, score in retrieved_docs.items() if score [1] 0.5 and len(doc) [2] 0}
A. >
B. <
C. !=
D. ==
Common Mistakes
Using '<' instead of '>' for score comparison
Using '==' instead of '!=' for length check
Using '<' for length which excludes valid docs
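A worked version of the task 4 comprehension, with `>` for the score and `!= 0` for the length check (a `> 0` length check is equivalent for non-empty strings); the sample documents and scores are invented for illustration.

```python
# Hypothetical retrieval results: doc text -> relevance score.
retrieved_docs = {
    'Paris is the capital of France.': 0.92,
    '': 0.88,                  # empty doc: dropped by the length check
    'Unrelated text.': 0.31,   # low score: dropped by the score check
}

# Keep only non-empty documents scoring above 0.5.
filtered_docs = {
    doc: score
    for doc, score in retrieved_docs.items()
    if score > 0.5 and len(doc) != 0
}

print(filtered_docs)  # only the first document survives both checks
```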
Task 5: Fill in the blank (hard)

Fill all three blanks to build a dictionary of document texts and their scores filtered by score > 0.7.

high_score_docs = {[1]: [2] for [3], [2] in docs_with_scores.items() if [2] > 0.7}
A. text
B. score
C. doc
D. item
Common Mistakes
Using 'text' instead of 'doc' for keys
Using 'item' which is undefined
Mixing variable names inconsistently
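The completed task 5 comprehension, using `doc` for keys and `score` for values as the mistakes list suggests; the document names and scores here are made up for illustration.

```python
# Hypothetical documents paired with relevance scores.
docs_with_scores = {
    'doc_a': 0.95,
    'doc_b': 0.71,
    'doc_c': 0.40,
}

# Consistent names throughout: doc as the key, score as the value.
high_score_docs = {
    doc: score for doc, score in docs_with_scores.items() if score > 0.7
}

print(high_score_docs)  # doc_c falls below the 0.7 threshold
```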