Prompt Engineering / GenAI · ~20 mins

When to fine-tune vs. prompt-engineer: Practice Questions

Challenge - 5 Problems
🧠 Conceptual · intermediate
Choosing between fine-tuning and prompt engineering

You have a general language model and want it to perform well on a specific task with limited data. Which approach is best to start with?

A. Fine-tune the model immediately using the limited data to specialize it.
B. Ignore the task and use the model as-is, without any changes.
C. Train a new model from scratch on the limited data.
D. Use prompt engineering to guide the model without changing its weights.
💡 Hint

Think about the cost and data needed for fine-tuning versus prompt engineering.
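To make the hint concrete: prompt engineering adapts a general model by changing only the input text, not the weights. A minimal sketch below builds a few-shot prompt for a hypothetical sentiment task; the task wording, examples, and helper name are all illustrative, not from any particular API.

```python
# Prompt engineering sketch: pack a task description and a few labeled
# examples into the prompt; the model's weights stay untouched.
def build_few_shot_prompt(task, examples, query):
    """Assemble instructions + labeled examples + the new input."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Label:")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("I loved this film.", "positive"), ("Terrible service.", "negative")],
    "The battery life is great.",
)
print(prompt)
```

The resulting string would be sent to the base model as-is; switching tasks only requires swapping the instructions and examples, which is why this is the cheaper starting point with limited data.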

🧠 Conceptual · intermediate
When is fine-tuning preferred over prompt engineering?

Which situation best justifies fine-tuning a large language model instead of relying on prompt engineering?

A. You want to use the model for many different unrelated tasks.
B. You have a large, high-quality dataset specific to your task and need consistent output.
C. You want to save computational resources and avoid retraining.
D. You want to quickly test different instructions without changing the model.
💡 Hint

Consider when changing the model weights is beneficial.

Metrics · advanced
Evaluating fine-tuning vs prompt engineering performance

You fine-tune a model and also try prompt engineering on the base model. You measure accuracy on a test set. Which metric result indicates fine-tuning improved performance?

base_accuracy = 0.75
fine_tuned_accuracy = 0.82
prompt_engineered_accuracy = 0.78
A. Fine-tuning improved accuracy by 7 percentage points over the base model and 4 over prompt engineering.
B. Prompt engineering improved accuracy more than fine-tuning.
C. Base model accuracy is highest, so there was no improvement.
D. Fine-tuning decreased accuracy compared to prompt engineering.
💡 Hint

Compare the accuracy numbers carefully.
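The comparison reduces to simple subtraction on the three reported accuracies; a short check, using the numbers from the question:

```python
base_accuracy = 0.75
fine_tuned_accuracy = 0.82
prompt_engineered_accuracy = 0.78

# Differences expressed in percentage points (rounding guards against
# floating-point artifacts like 0.07000000000000006).
gain_over_base = round((fine_tuned_accuracy - base_accuracy) * 100)
gain_over_prompt = round((fine_tuned_accuracy - prompt_engineered_accuracy) * 100)

print(gain_over_base, gain_over_prompt)  # 7 4
```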

🔧 Debug · advanced
Why did fine-tuning not improve model performance?

You fine-tuned a model on a small dataset but test accuracy dropped. What is the most likely cause?

A. Prompt engineering was not used before fine-tuning.
B. The model architecture was changed accidentally.
C. The dataset was too small, causing overfitting during fine-tuning.
D. The test set was too large compared to the training set.
💡 Hint

Think about what happens when fine-tuning with little data.
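The failure mode behind this question can be reproduced in miniature outside the LLM setting: a model with enough capacity to memorize a tiny training set will score well on that set and badly on held-out data. The sketch below uses a degree-4 polynomial on 5 points as a numerical stand-in (the noise values are fixed, made-up numbers for reproducibility):

```python
import numpy as np

# Toy illustration of overfitting with too little data: a degree-4
# polynomial fit to only 5 noisy points memorizes them exactly, yet
# misses the true (linear) relationship on held-out inputs.
x_train = np.linspace(0.0, 1.0, 5)
noise = np.array([0.05, -0.08, 0.10, -0.07, 0.03])  # fixed noise, for reproducibility
y_train = 2.0 * x_train + noise

# Enough parameters to pass through every training point.
coeffs = np.polyfit(x_train, y_train, deg=4)

x_test = np.linspace(0.0, 1.0, 50)
y_test = 2.0 * x_test  # the true underlying relationship

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.2e}")  # essentially zero: memorized the points
print(f"test MSE:  {test_mse:.2e}")   # much larger: poor generalization
```

Fine-tuning a large model on a small dataset behaves analogously: the loss on the fine-tuning set drops while test accuracy falls, which points to option C.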

Model Choice · expert
Selecting approach for a multi-domain chatbot

You want to build a chatbot that handles many topics without retraining often. Which approach is best?

A. Use prompt engineering with a single large base model to adapt responses dynamically.
B. Fine-tune separate models for each domain and switch between them.
C. Train a new model from scratch on combined domain data.
D. Use a rule-based system instead of a language model.
💡 Hint

Consider flexibility and maintenance effort for many topics.
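The prompt-based multi-domain pattern the hint alludes to can be sketched as per-topic system prompts routed to one base model. The topics, prompt texts, and message format below are hypothetical placeholders (the chat-style role/content structure is one common convention, not a specific vendor's API):

```python
# One base model, many topics, adapted per request via the prompt alone;
# adding a topic means adding a string, not retraining a model.
SYSTEM_PROMPTS = {
    "cooking": "You are a helpful cooking assistant. Give concise recipes.",
    "travel": "You are a travel planner. Suggest practical itineraries.",
    "finance": "You are a cautious personal-finance explainer.",
}

def build_request(topic, user_message, default="You are a helpful assistant."):
    """Pick a topic-specific system prompt and wrap the user message."""
    system = SYSTEM_PROMPTS.get(topic, default)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

messages = build_request("travel", "Plan a weekend in Lisbon.")
print(messages[0]["content"])
```

Maintaining this dictionary is far cheaper than fine-tuning and hosting one model per domain, which is the trade-off option A captures.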