Pre-trained models save time by reusing knowledge already learned from large datasets. Starting from this knowledge lets you build new models faster and with less data of your own.
Why pre-trained models accelerate development in PyTorch
import torchvision.models as models

model = models.resnet18(pretrained=True)
Use pretrained=True to load a model already trained on a large dataset.
You can then fine-tune this model on your own data to adapt it to your task.
import torchvision.models as models

# Load a pre-trained ResNet18 model
model = models.resnet18(pretrained=True)
import torchvision.models as models

# Load a pre-trained VGG16 model
model = models.vgg16(pretrained=True)
import torchvision.models as models

# Load a model without pre-training
model = models.resnet18(pretrained=False)
This code loads a pre-trained ResNet18 model, changes its last layer for 2 classes, and runs one training step on dummy data. It shows how pre-trained models can be quickly adapted.
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

# Load pre-trained ResNet18
model = models.resnet18(pretrained=True)

# Replace the last layer to match 2 classes
model.fc = nn.Linear(model.fc.in_features, 2)

# Dummy input and labels
inputs = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

# Forward pass
outputs = model(inputs)
loss = criterion(outputs, labels)

# Backward pass and parameter update
optimizer.zero_grad()  # clear any stale gradients before backward
loss.backward()
optimizer.step()

print(f"Loss after one training step: {loss.item():.4f}")
Pre-trained models have learned useful features from large datasets like ImageNet.
Fine-tuning means adjusting the model slightly to fit your specific task.
Using pre-trained models reduces the need for large labeled datasets and long training times.
Pre-trained models speed up AI development by starting with learned knowledge.
They help when you have limited data or computing power.
Fine-tuning adapts these models to new tasks quickly and effectively.