Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to measure the latency of a function call using the time module.
Prompt Engineering / GenAI
import time

start = time.[1]()
result = my_function()
end = time.[1]()
latency = end - start
print(f"Latency: {latency} seconds")
Common Mistakes
Using time.sleep() instead of a timer function.
Using time.time() which is less precise.
Answer
time.perf_counter() gives the most precise timer for measuring short durations like latency.
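As a runnable illustration of the answer, here is the completed snippet with perf_counter used for both timestamps (my_function here is a stand-in workload, not part of the exercise):

```python
import time

def my_function():
    # Stand-in workload; replace with the call you want to time.
    return sum(range(100_000))

# time.perf_counter() is a monotonic, high-resolution clock,
# which makes it the right choice for short-duration measurements.
start = time.perf_counter()
result = my_function()
end = time.perf_counter()
latency = end - start
print(f"Latency: {latency:.6f} seconds")
```

Note that both timestamps must come from the same clock; mixing time.time() and time.perf_counter() would produce a meaningless difference.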
2. Fill in the blank (medium)
Complete the code to batch-process inputs to reduce latency during model inference.
Prompt Engineering / GenAI
batch_size = [1]
inputs = get_inputs()
batched_inputs = [inputs[i:i+batch_size] for i in range(0, len(inputs), batch_size)]
Common Mistakes
Using batch size 1 which does not improve latency.
Using batch size 0 which causes errors.
Answer
A batch size of 32 is a common choice to balance throughput and latency in inference.
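A runnable sketch of the batching pattern, with batch_size filled in as 32 (get_inputs is a hypothetical input source returning 100 dummy items):

```python
def get_inputs():
    # Hypothetical input source: 100 dummy items.
    return list(range(100))

batch_size = 32  # common default balancing throughput and latency
inputs = get_inputs()
# Slice the flat input list into chunks of at most batch_size items;
# the final chunk may be smaller than batch_size.
batched_inputs = [inputs[i:i+batch_size] for i in range(0, len(inputs), batch_size)]
print([len(b) for b in batched_inputs])  # [32, 32, 32, 4]
```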
3. Fill in the blank (hard)
Fix the error in the code so that multiple inference calls run asynchronously to reduce latency.
Prompt Engineering / GenAI
import asyncio

async def infer_async(input):
    return model.predict(input)

async def main():
    tasks = [infer_async(i) for i in inputs]
    results = await asyncio.[1](*tasks)
    print(results)

asyncio.run(main())
Common Mistakes
Using asyncio.wait which returns futures, not results directly.
Using asyncio.run inside async function causing errors.
Answer
asyncio.gather runs multiple async tasks concurrently and collects their results.
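A self-contained version of the completed code (DummyModel and the inputs list are stand-ins for the exercise's model and data). Note that gather takes the awaitables as separate arguments, so the list must be unpacked with *tasks:

```python
import asyncio

class DummyModel:
    # Stand-in for a real inference model.
    def predict(self, x):
        return x * 2

model = DummyModel()
inputs = [1, 2, 3]

async def infer_async(item):
    # In real code, a blocking predict() should be offloaded
    # (e.g. via asyncio.to_thread) so it does not block the event loop.
    return model.predict(item)

async def main():
    tasks = [infer_async(i) for i in inputs]
    # gather runs the coroutines concurrently and returns their
    # results in the same order as the input tasks.
    results = await asyncio.gather(*tasks)
    return results

results = asyncio.run(main())
print(results)  # [2, 4, 6]
```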
4. Fill in the blank (hard)
Fill both blanks to create a dictionary comprehension that keeps only the features whose latency is below the threshold.
Prompt Engineering / GenAI
latency_dict = {feature: latency for feature, latency in features_latency.items() if latency [1] [2]}
Common Mistakes
Using '>' instead of '<' which filters wrong features.
Using too low or too high threshold values.
Answer
We want features with latency less than 0.5 seconds, so use '<' and 0.5.
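For reference, the completed comprehension with both blanks filled in as '<' and 0.5, run against a hypothetical features_latency dictionary:

```python
# Hypothetical per-feature latency measurements in seconds.
features_latency = {"embed": 0.12, "rerank": 0.8, "tokenize": 0.03}

# Keep only the features whose latency is under 0.5 seconds.
latency_dict = {feature: latency
                for feature, latency in features_latency.items()
                if latency < 0.5}
print(latency_dict)  # {'embed': 0.12, 'tokenize': 0.03}
```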
5. Fill in the blank (hard)
Fill all three blanks to create a dictionary comprehension that maps model names to their average latency when that average is above the threshold.
Prompt Engineering / GenAI
avg_latency = {model[1]: sum(times)/len(times) for model, times in latency_data.items() if sum(times)/len(times) [2] [3]}
Common Mistakes
Using .upper() instead of .lower() causing inconsistent keys.
Using '<' instead of '>' in the condition.
Answer
We convert model names to lowercase with .lower(), and keep only models whose average latency is greater than 0.1 seconds, so the blanks are '.lower()', '>', and 0.1.
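The completed comprehension, run against a hypothetical latency_data dictionary of per-model latency samples:

```python
# Hypothetical latency samples (in seconds) for each model name.
latency_data = {"GPT": [0.2, 0.3], "Tiny": [0.05, 0.07], "Llama": [0.15, 0.25]}

# Lowercase the model names for consistent keys, and keep only
# models whose average latency exceeds 0.1 seconds.
avg_latency = {model.lower(): sum(times) / len(times)
               for model, times in latency_data.items()
               if sum(times) / len(times) > 0.1}
print(avg_latency)
```

Here "Tiny" is dropped (average 0.06 s), while "GPT" and "Llama" survive under lowercase keys.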