Imagine an AI model trained mostly on images of cats with white fur. What is the most likely effect on the AI's ability to recognize cats with black fur?
Think about what the AI learned from the examples it saw most often.
If the training data mostly shows white cats, the AI learns features common to white cats. It may not learn features of black cats well, so it struggles to recognize them.
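The effect can be illustrated with a toy sketch (invented for this example, not a real vision model): the "feature" is a single fur-brightness number, and a nearest-prototype detector trained almost entirely on white cats ends up with a prototype far from any black cat.

```python
# Toy sketch: a "cat detector" that learns one feature (fur brightness, 0-1).
# Training data is dominated by white cats (high brightness values).
white_cats = [0.90, 0.85, 0.95, 0.88, 0.92]  # many white-cat examples
black_cats = [0.10]                          # a single black-cat example

# The learned "cat" prototype is the mean of all training examples,
# so it is pulled toward the overrepresented white cats.
all_cats = white_cats + black_cats
prototype = sum(all_cats) / len(all_cats)
print(f"Learned prototype brightness: {prototype:.2f}")

# A new black cat is much farther from the prototype than a new white cat,
# so a distance-based detector is far more likely to miss it.
new_black_cat, new_white_cat = 0.12, 0.90
print(f"Distance to new black cat: {abs(prototype - new_black_cat):.2f}")
print(f"Distance to new white cat: {abs(prototype - new_white_cat):.2f}")
```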
Consider a classification model trained on 95% class A and 5% class B data. After training, the model predicts all inputs as class A. What is the approximate accuracy on the training set?
total_samples = 1000
class_A_samples = 950
class_B_samples = 50
correct_predictions = 950  # model predicts all inputs as class A
accuracy = correct_predictions / total_samples
print(f"Accuracy: {accuracy:.2f}")
How many samples are correctly predicted if the model always predicts class A?
The model correctly predicts all 950 class A samples but misses all 50 class B samples. Accuracy = 950/1000 = 0.95.
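Overall accuracy hides this failure mode. A quick sketch (using the same 950/50 split as above) that computes accuracy per class makes it visible:

```python
# Ground-truth labels: 950 of class "A", 50 of class "B".
labels = ["A"] * 950 + ["B"] * 50
# The degenerate model predicts class "A" for every input.
predictions = ["A"] * 1000

correct_A = sum(1 for y, p in zip(labels, predictions) if y == p == "A")
correct_B = sum(1 for y, p in zip(labels, predictions) if y == p == "B")

print(f"Overall accuracy: {(correct_A + correct_B) / len(labels):.2f}")  # 0.95
print(f"Class A accuracy: {correct_A / 950:.2f}")                        # 1.00
print(f"Class B accuracy: {correct_B / 50:.2f}")                         # 0.00
```

The 95% headline number is entirely driven by class A; the model is useless on class B.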
You want to improve an AI model's ability to generalize to new data. Which training data strategy is best?
Think about what helps the model learn to handle different situations.
A large and diverse dataset helps the model learn many patterns and generalize well to new data. Small or narrow data limits learning, and unrelated data confuses the model.
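One way to see why diversity matters is a toy 1-D curve-fitting setup (invented for this sketch): a line fitted only to a narrow slice of the input range extrapolates badly to new inputs, while a fit over a wider range of the same kind of data tracks the target much better.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def target(x):
    return x * x  # the true relationship the model should capture

narrow_x = [0.0, 0.5, 1.0]        # training data from a small input slice
wide_x = [0.0, 1.0, 2.0, 3.0]     # same kind of data over a wider range

for name, xs in [("narrow", narrow_x), ("diverse", wide_x)]:
    slope, intercept = fit_line(xs, [target(x) for x in xs])
    pred = slope * 3.0 + intercept  # predict at an unseen point, x = 3
    print(f"{name}: prediction at x=3 is {pred:.2f} (true value {target(3.0):.2f})")
```

The narrow fit never saw large inputs, so its prediction at x = 3 is far off; the diverse fit lands much closer.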
An AI model trained on data with 90% negative and 10% positive cases achieves 90% accuracy. However, its precision is 0.5 and recall is 0.1 for the positive class. What does this tell you about the model's behavior?
Recall measures how many actual positives are found. Precision measures how many predicted positives are correct.
Low recall (0.1) means the model finds only 10% of actual positives; precision 0.5 means only half of its positive predictions are correct. The 90% accuracy is misleading: because 90% of cases are negative, the model can score well while largely ignoring the positive class, so it misses most positives and is unreliable for that class.
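One confusion matrix consistent with all three reported numbers (counts assumed here for illustration: 1,000 samples, 100 of them positive) is TP = 10, FP = 10, FN = 90, TN = 890:

```python
# Assumed confusion-matrix counts that reproduce the stated metrics.
TP, FP, FN, TN = 10, 10, 90, 890

accuracy = (TP + TN) / (TP + FP + FN + TN)
precision = TP / (TP + FP)  # of predicted positives, how many are real
recall = TP / (TP + FN)     # of actual positives, how many are found

print(f"Accuracy:  {accuracy:.2f}")   # 0.90
print(f"Precision: {precision:.2f}")  # 0.50
print(f"Recall:    {recall:.2f}")     # 0.10
```

Note that only 20 of the 100 actual positives even appear in the model's positive predictions, yet accuracy still looks strong because the 890 true negatives dominate.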
Given this training loop snippet, why does the model's loss not decrease?
for epoch in range(5):
    for x, y in train_loader:
        optimizer.zero_grad()
        output = model(x)
        loss = loss_fn(output, y)
        loss.backward
        optimizer.step()
Check how functions are called in Python.
loss.backward is a method and must be called with parentheses: loss.backward(). Without the call, no gradients are computed, so optimizer.step() has nothing to apply and the loss never decreases.
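The pitfall is general Python behavior, not specific to PyTorch: naming a method without parentheses merely evaluates to the bound-method object and runs nothing. A minimal sketch (a stand-in class invented here, not the real torch API):

```python
class FakeLoss:
    """Stand-in for a loss tensor; backward() sets a flag when it runs."""
    def __init__(self):
        self.grads_computed = False

    def backward(self):
        self.grads_computed = True

loss = FakeLoss()

loss.backward                # evaluates to a bound method; nothing runs
print(loss.grads_computed)   # False -- the bug in the snippet above

loss.backward()              # actually calls the method
print(loss.grads_computed)   # True
```

Linters such as flake8 or pylint typically flag the bare `loss.backward` line as a statement with no effect, which is a quick way to catch this class of bug.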