Agentic AI · ~20 mins

Handling retrieval failures gracefully in Agentic AI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Retrieval Resilience Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
1:30 remaining
Why is it important to handle retrieval failures gracefully in AI systems?

Imagine an AI assistant that fetches information from a database. What is the main reason to handle retrieval failures gracefully?

A. To provide a smooth user experience even when data is missing or delayed
B. To ensure the system crashes quickly so developers notice the problem
C. To ignore errors and continue without informing the user
D. To always return empty results without explanation
Attempts: 2 left
💡 Hint

Think about how users feel when an app suddenly stops working or shows confusing errors.
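To make the idea in option A concrete, here is a minimal sketch of graceful degradation (the `fetch_user_profile` function and its data store are hypothetical, invented for illustration): when the lookup fails, the caller gets a clearly labeled placeholder instead of a crash or a silent empty result.

```python
def fetch_user_profile(user_id, store):
    """Return the profile if present, else a graceful fallback.

    Instead of raising and crashing the caller, we return a
    clearly labeled placeholder the UI can show to the user.
    """
    try:
        return store[user_id]
    except KeyError:
        # Graceful degradation: tell the user, don't fail silently
        return {"name": "Guest", "note": "Profile temporarily unavailable"}

profiles = {"u1": {"name": "Ada"}}
print(fetch_user_profile("u1", profiles))  # found: {'name': 'Ada'}
print(fetch_user_profile("u2", profiles))  # graceful fallback
```

The key design choice is that the fallback is honest: it both keeps the app running and tells the user why the data looks different.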

Predict Output
intermediate
1:30 remaining
What is the output when retrieval fails and fallback is used?

Consider this Python code snippet simulating a retrieval with fallback:

def retrieve_data(key):
    data_store = {'a': 1, 'b': 2}
    try:
        return data_store[key]
    except KeyError:
        return 'default_value'

result = retrieve_data('c')
print(result)

What will be printed?

A. 'default_value'
B. 2
C. KeyError
D. 1
Attempts: 2 left
💡 Hint

What happens if the key is not found in the dictionary?
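For comparison, the same fallback behavior can be written more idiomatically with `dict.get`, which returns a default value instead of raising `KeyError`:

```python
data_store = {'a': 1, 'b': 2}

# dict.get returns the second argument when the key is absent,
# so no try/except is needed for this simple fallback.
result = data_store.get('c', 'default_value')
print(result)  # prints: default_value
```

Both forms print the same thing; `try/except KeyError` is preferable when the fallback is expensive to build or when a missing key should also be logged.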

Model Choice
advanced
2:00 remaining
Which model architecture is best suited to handle missing data during retrieval?

You want an AI model that can still make reasonable predictions even if some input data is missing or incomplete. Which model type is best?

A. Standard linear regression without imputation
B. Feedforward neural network without dropout or masking
C. Recurrent neural network with masking for missing inputs
D. Decision tree that requires complete data
Attempts: 2 left
💡 Hint

Think about models that can ignore or skip missing parts of the input.
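To make the masking idea concrete, here is a simplified NumPy sketch (not a full RNN, just one recurrent update step, with all names invented for illustration): a time step flagged as missing leaves the hidden state unchanged instead of polluting it with a fake observation.

```python
import numpy as np

def masked_rnn_step(h, x, mask, W_h, W_x):
    """One simplified recurrent update with masking.

    mask == 0 marks a missing time step: the hidden state is
    carried forward untouched, so missing data is skipped
    rather than treated as a real (e.g. zero-valued) input.
    """
    h_new = np.tanh(W_h @ h + W_x @ x)
    return mask * h_new + (1 - mask) * h

h = np.zeros(2)
W_h = np.eye(2)
W_x = np.eye(2)

h = masked_rnn_step(h, np.array([1.0, -1.0]), 1.0, W_h, W_x)   # observed step updates h
h_after_missing = masked_rnn_step(h, np.array([0.0, 0.0]), 0.0, W_h, W_x)  # missing step: h unchanged

print(np.allclose(h, h_after_missing))  # prints: True
```

Deep-learning frameworks apply the same trick at scale, e.g. via a masking layer that skips padded or missing time steps during the recurrent pass.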

Metrics
advanced
2:00 remaining
Which metric best reflects graceful handling of retrieval failures in AI predictions?

An AI system sometimes returns fallback predictions when retrieval fails. Which metric helps measure if these fallbacks keep predictions reliable?

A. Accuracy on only complete data samples
B. Mean squared error including fallback predictions
C. Training loss before deployment
D. Number of retrieval failures logged
Attempts: 2 left
💡 Hint

Consider a metric that measures prediction quality including fallback cases.
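A minimal sketch of the idea behind option B (the values here are made up for illustration): compute mean squared error over every prediction, fallback cases included, so degraded answers still count against the system.

```python
def mse(predictions, targets):
    """Mean squared error over all predictions, fallbacks included."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

targets     = [1.0, 2.0, 3.0]
predictions = [1.0, 2.0, 0.0]  # last value is a fallback (retrieval failed)

# Scoring only the complete samples would hide the fallback's cost;
# including it reveals the real end-to-end prediction quality.
print(mse(predictions, targets))  # prints: 3.0
```

Evaluating only complete samples (option A) would report a perfect score here, while the all-inclusive MSE correctly charges the system for its degraded fallback.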

🔧 Debug
expert
2:30 remaining
Why does this retrieval fallback code cause a runtime error?

Review this Python code snippet:

def get_data(key, fallback=None):
    data = {'x': 10, 'y': 20}
    try:
        return data[key]
    except KeyError:
        return fallback.upper()

result = get_data('z', fallback=None)
print(result)

What error occurs and why?

A. TypeError because data[key] returns an int but fallback is None
B. No error, prints None
C. KeyError because 'z' is not in data and fallback is ignored
D. AttributeError because fallback is None and None has no 'upper' method
Attempts: 2 left
💡 Hint

What happens if fallback is None and you call a string method on it?
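For reference, one way to fix the buggy snippet (names reused from the question): guard against a non-string fallback before calling a string method on it.

```python
def get_data(key, fallback=None):
    data = {'x': 10, 'y': 20}
    try:
        return data[key]
    except KeyError:
        # Guard: only call .upper() when fallback is actually a string,
        # so a None fallback no longer raises AttributeError
        if isinstance(fallback, str):
            return fallback.upper()
        return fallback

print(get_data('x'))                      # prints: 10
print(get_data('z', fallback='missing'))  # prints: MISSING
print(get_data('z'))                      # prints: None
```

The original code fails because `None` is a valid default for `fallback`, yet the except branch unconditionally assumes a string; checking the type (or requiring a string default) closes that gap.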