Recall & Review
beginner
What is the main purpose of K-fold cross-validation in machine learning?
K-fold cross-validation estimates how well a model will generalize to new, unseen data by splitting the data into K parts and testing the model on each part in turn.
beginner
How does K-fold cross-validation work?
The data is divided into K equal parts (folds). The model trains on K-1 folds and tests on the remaining fold. This process repeats K times, each time with a different fold as the test set.
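The fold mechanics described above can be sketched in a few lines of plain Python. This is a minimal illustration of the index splitting only (real projects would typically use a library helper such as scikit-learn's `KFold`); the function name `kfold_indices` is ours, not a standard API.

```python
def kfold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for K-fold CV.

    Each of the K folds serves exactly once as the test set,
    while the remaining K-1 folds form the training set.
    """
    indices = list(range(n_samples))
    # Distribute samples as evenly as possible across the K folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# With 10 samples and K=5, the model is trained and tested 5 times.
splits = list(kfold_indices(10, 5))
print(len(splits))   # 5 rounds of training and testing
print(splits[0])     # first round: train on indices 2-9, test on [0, 1]
```

Note that every sample lands in exactly one test fold, which is what lets K-fold use all data points for both training and testing.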
intermediate
Why is K-fold cross-validation better than a single train-test split?
Because every data point is used for training (in K-1 of the rounds) and for testing (in exactly one round), the performance estimate is less sensitive to how one particular split happens to fall, making it more reliable than a single train-test split.
beginner
What is a common choice for the number of folds (K) in K-fold cross-validation?
A common choice is K=5 or K=10, balancing between reliable performance estimates and computational cost.
beginner
What metric is typically averaged across folds in K-fold cross-validation?
Performance metrics like accuracy, precision, recall, or mean squared error are averaged across all folds to get a final estimate.
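Averaging a metric across folds looks like the sketch below. The per-fold accuracies here are hypothetical numbers for illustration only; reporting the standard deviation alongside the mean is common practice, since it shows how much the estimate varies from fold to fold.

```python
# Hypothetical accuracy from each of K=5 folds (illustrative values).
fold_accuracies = [0.82, 0.85, 0.79, 0.88, 0.84]

# The final cross-validation estimate is the mean across folds.
mean_accuracy = sum(fold_accuracies) / len(fold_accuracies)

# The spread across folds indicates how stable the estimate is.
variance = sum((a - mean_accuracy) ** 2
               for a in fold_accuracies) / len(fold_accuracies)
std_dev = variance ** 0.5

print(f"CV accuracy: {mean_accuracy:.3f} +/- {std_dev:.3f}")
```

The same pattern applies to any per-fold metric mentioned above (precision, recall, mean squared error): compute it once per fold, then average.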
What does each fold represent in K-fold cross-validation?
If K=5, how many times is the model trained and tested during K-fold cross-validation?
Which of the following is NOT a benefit of K-fold cross-validation?
What happens if K equals the number of data points (N) in K-fold cross-validation?
Which metric is commonly averaged across folds in K-fold cross-validation?
Explain how K-fold cross-validation helps in evaluating a machine learning model.
Describe the difference between K-fold cross-validation and a simple train-test split.