Computer Vision · ~5 mins

Model evaluation best practices in Computer Vision

Introduction
We evaluate a model to trust its decisions and to improve it where needed. Typical moments to evaluate:
After training, to check whether the model recognizes images correctly.
Before deploying to a real application, to avoid costly mistakes.
When comparing two models, to pick the better one.
To find out whether the model is overfitting or underfitting.
To measure progress during model improvements.
Syntax
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Example: Calculate accuracy
accuracy = accuracy_score(true_labels, predicted_labels)

# Other metrics
precision = precision_score(true_labels, predicted_labels, average='weighted')
recall = recall_score(true_labels, predicted_labels, average='weighted')
f1 = f1_score(true_labels, predicted_labels, average='weighted')
Use the right metric for your task; accuracy is common but not always the best choice.
For multi-class problems, set the average parameter to combine the per-class scores.
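The averaging mode matters: 'macro' treats every class equally, while 'weighted' weights each class by its number of samples. A minimal sketch with hypothetical 3-class labels (chosen only to make the two modes differ):

```python
from sklearn.metrics import precision_score

# Hypothetical 3-class labels, chosen only to illustrate the averaging modes
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 1]

# 'macro' averages per-class precision equally; 'weighted' weights by class support
macro = precision_score(y_true, y_pred, average='macro', zero_division=0)
weighted = precision_score(y_true, y_pred, average='weighted', zero_division=0)
print(f"Macro: {macro:.3f}, Weighted: {weighted:.3f}")  # Macro: 0.500, Weighted: 0.667
```

Here class 2 is never predicted, so its precision is 0; macro averaging counts that miss as heavily as the other classes, while weighted averaging downplays it because class 2 has only one sample.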
Examples
Calculates accuracy for a small example of true and predicted labels.
from sklearn.metrics import accuracy_score

accuracy = accuracy_score([0, 1, 1, 0], [0, 1, 0, 0])
print(f"Accuracy: {accuracy}")
Calculates weighted precision for multi-class predictions.
from sklearn.metrics import precision_score

precision = precision_score([0, 1, 1, 0], [0, 1, 0, 0], average='weighted')
print(f"Precision: {precision}")
Calculates recall and F1 score to understand model balance between precision and recall.
from sklearn.metrics import recall_score, f1_score

recall = recall_score([0, 1, 1, 0], [0, 1, 0, 0], average='weighted')
f1 = f1_score([0, 1, 1, 0], [0, 1, 0, 0], average='weighted')
print(f"Recall: {recall}, F1 Score: {f1}")
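The single-number metrics above hide where the errors occur. A confusion matrix, another standard scikit-learn utility, breaks them down per class; a minimal sketch on the same toy labels:

```python
from sklearn.metrics import confusion_matrix

# Same toy labels as the examples above
cm = confusion_matrix([0, 1, 1, 0], [0, 1, 0, 0])
print(cm)
# Rows are true classes, columns are predicted classes:
# [[2 0]
#  [1 1]]
```

Reading row 1 shows that one sample of class 1 was misclassified as class 0, which is exactly the error that lowers the recall computed above.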
Sample Model
This program trains a Random Forest model on digit images and evaluates it using common metrics to check how well it predicts.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Load sample image data (digits)
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target

# Split data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train a simple model
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Predict on test data
predictions = model.predict(X_test)

# Evaluate model
accuracy = accuracy_score(y_test, predictions)
precision = precision_score(y_test, predictions, average='weighted')
recall = recall_score(y_test, predictions, average='weighted')
f1 = f1_score(y_test, predictions, average='weighted')

print(f"Accuracy: {accuracy:.3f}")
print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
print(f"F1 Score: {f1:.3f}")
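The four metrics printed above can also be produced per class in one call with scikit-learn's classification_report. A minimal sketch on toy labels (the label values here are illustrative, not taken from the digits data):

```python
from sklearn.metrics import classification_report

y_true = [0, 1, 1, 0, 2, 2]
y_pred = [0, 1, 0, 0, 2, 1]

# One table with precision, recall, F1, and support for each class
report = classification_report(y_true, y_pred, zero_division=0)
print(report)
```

This is often the quickest way to spot a single weak class that the weighted averages would smooth over.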
Important Notes
Always split your data into training and test sets to get an honest evaluation.
Use multiple metrics to get a full picture of model performance.
Beware of imbalanced data: accuracy alone can be misleading.
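To see why accuracy can mislead on imbalanced data, consider a hypothetical dataset with 9 negatives and 1 positive, and a degenerate model that always predicts the majority class:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical imbalanced labels: 9 negatives, 1 positive
y_true = [0] * 9 + [1]
y_pred = [0] * 10  # a "model" that always predicts the majority class

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, zero_division=0)
print(f"Accuracy: {acc}")  # 0.9, despite missing every positive
print(f"F1 Score: {f1}")   # 0.0 for the positive class
```

The 90% accuracy looks good, but the F1 score of the positive class exposes that the model never detects it; this is why the notes above recommend multiple metrics.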
Summary
Model evaluation tells us how well a model performs its task.
Use train/test split and multiple metrics like accuracy, precision, recall, and F1 score.
Good evaluation helps improve models and avoid mistakes in real use.