Pre-trained models save time because they have already learned useful features from large datasets. This means you don't have to train from scratch, making your work faster and easier.
Why pre-trained models save time in Computer Vision
Introduction
When you want to quickly build an image recognition app without collecting lots of data.
When you have limited computing power and want to avoid long training times.
When you want to improve your model's accuracy by using knowledge learned from big datasets.
When you need a good starting point for your own custom model.
When you want to experiment with different tasks without training a new model each time.
Syntax
from torchvision import models

model = models.resnet18(weights='IMAGENET1K_V1')
This example loads a pre-trained ResNet18 model from PyTorch's torchvision library.
Setting weights='IMAGENET1K_V1' downloads and loads weights learned on the ImageNet dataset (version 1 of the released checkpoint).
Examples
Loads the VGG16 model pre-trained on the ImageNet dataset using TensorFlow Keras.
from tensorflow.keras.applications import VGG16

model = VGG16(weights='imagenet')
Loads the AlexNet model with pre-trained weights from PyTorch.
import torchvision.models as models

model = models.alexnet(weights='IMAGENET1K_V1')
Sample Model
This code loads a pre-trained ResNet18 model and uses it to predict the class of a dog image. It shows how pre-trained models can give accurate predictions without any additional training.
import torch
from torchvision import models, transforms
from PIL import Image
import requests

# Load pre-trained ResNet18 model
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

# Image preprocessing
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

# Download an example image
url = 'https://upload.wikimedia.org/wikipedia/commons/9/9a/Pug_600.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image
input_tensor = preprocess(image)
input_batch = input_tensor.unsqueeze(0)  # create a mini-batch

# Run the model
with torch.no_grad():
    output = model(input_batch)

# Load labels
labels_url = 'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
labels = requests.get(labels_url).text.splitlines()

# Get top prediction
probabilities = torch.nn.functional.softmax(output[0], dim=0)
confidence, predicted_idx = torch.max(probabilities, 0)
print(f'Predicted label: {labels[predicted_idx]}')
print(f'Confidence: {confidence.item():.4f}')
Important Notes
Pre-trained models are trained on large datasets like ImageNet with millions of images.
You can fine-tune pre-trained models on your own smaller dataset to improve results.
Summary
Pre-trained models save time by reusing learned features from big datasets.
They help you get good results quickly without long training.
You can use them as a starting point for your own projects.