Prompt Engineering / GenAI · ~10 mins

Why RAG grounds LLMs in real data: Test Your Understanding

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to import the RAG model class from transformers.

from transformers import [1]
A. RagTokenizer
B. RagSequenceForGeneration
C. RagRetriever
D. RagModel
Common Mistakes
Choosing tokenizer or retriever classes instead of the main model.
Confusing RagSequenceForGeneration with RagModel.
Task 2: Fill in the blank (medium)

Complete the code to initialize the retriever for RAG using a dataset.

retriever = RagRetriever.from_pretrained('facebook/rag-token-base', index_name=[1])
A. 'exact'
B. 'custom'
C. 'legacy'
D. 'default'
Common Mistakes
Using 'custom' or 'default' which are not valid index names.
Confusing 'legacy' with the correct retrieval method.
Task 3: Fill in the blank (hard)

Fix the error in the code to generate an answer grounded in the retrieved documents.

outputs = model.generate(input_ids, context_doc_ids=[1])
A. retrieved_docs
B. retrieved_doc_embeds
C. retrieved_doc_ids
D. retrieved_doc_texts
Common Mistakes
Passing raw document texts instead of IDs.
Passing embeddings which are not accepted here.
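The distinction Task 3 tests can be made concrete with a toy sketch (plain Python, not the transformers API; the document store and function names here are invented for illustration): grounded generation receives document IDs and resolves them against a store at generation time, rather than taking raw texts or embeddings directly.

```python
# Toy illustration (invented names, not the transformers API) of why
# grounded generation takes document IDs rather than raw texts:
# the IDs are resolved against a document store at generation time.
DOC_STORE = {
    0: "RAG combines a retriever with a seq2seq generator.",
    1: "The retriever fetches passages from an external index.",
}

def generate_grounded(prompt, context_doc_ids):
    # Resolve IDs to texts, then condition the answer on them.
    context = " ".join(DOC_STORE[i] for i in context_doc_ids)
    return f"Answer to {prompt!r}, grounded in: {context}"

print(generate_grounded("What is RAG?", [0, 1]))
```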
Task 4: Fill in the blank (hard)

Fill both blanks to create a RAG model with a retriever and generate output.

model = RagSequenceForGeneration.from_pretrained('facebook/rag-token-base', retriever=[1])
outputs = model.generate([2])
A. retriever
B. input_ids
C. input_texts
D. tokenizer
Common Mistakes
Passing raw texts instead of token IDs to generate.
Not passing the retriever when creating the model.
Task 5: Fill in the blank (hard)

Fill all three blanks to tokenize input, retrieve docs, and generate grounded output.

inputs = tokenizer([1], return_tensors='pt')
retrieved_docs = retriever.retrieve([2])
outputs = model.generate(input_ids=inputs['input_ids'], context_doc_ids=[3])
A. 'What is RAG?'
B. inputs['input_ids']
C. retrieved_docs['doc_ids']
D. retrieved_docs['texts']
Common Mistakes
Using raw texts instead of token IDs for retrieval or generation.
Mixing up document texts and document IDs.
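The full flow the five tasks build toward (tokenize the input, retrieve documents, generate grounded output) can be sketched end to end in plain Python. This is a toy illustration under invented names, not the transformers API:

```python
# Toy end-to-end sketch of the RAG flow (invented names, not the
# transformers API): tokenize the question, retrieve document IDs,
# then generate an answer grounded in the retrieved documents.
DOCS = {
    0: "RAG grounds a language model in retrieved documents.",
    1: "Retrieval-augmented generation reduces hallucination.",
}

def tokenize(text):
    # Stand-in for a real tokenizer: lowercased, punctuation-stripped words.
    return [w.strip("?.,!") for w in text.lower().split()]

def retrieve(tokens):
    # Stand-in for a real retriever: IDs of docs sharing any token.
    return [i for i, d in DOCS.items() if set(tokens) & set(tokenize(d))]

def generate(tokens, context_doc_ids):
    # Stand-in for grounded generation: condition on the retrieved texts.
    context = " ".join(DOCS[i] for i in context_doc_ids)
    return f"Grounded answer using: {context}"

inputs = tokenize("What is RAG?")      # step 1: tokenize the question
doc_ids = retrieve(inputs)             # step 2: retrieve document IDs
outputs = generate(inputs, doc_ids)    # step 3: generate grounded output
print(outputs)
```

Note how the generation step takes document IDs, not texts: the same shape the quiz's `context_doc_ids` argument follows.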