Prompt Engineering / GenAI (~20 mins)

Pre-training and fine-tuning concepts in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Pre-training and Fine-tuning Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
Understanding the role of pre-training

Which statement best describes the purpose of pre-training in machine learning models?

A. Pre-training initializes a model by learning general patterns from a large dataset before adapting to a specific task.
B. Pre-training involves manually labeling data to improve model performance.
C. Pre-training is the final step where the model is tested on unseen data to check accuracy.
D. Pre-training is used to reduce the size of the dataset by removing irrelevant samples.
💡 Hint

Think about how a model learns general knowledge before focusing on a specific problem.
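The hint above can be made concrete with a toy two-phase training run. This is an illustrative sketch only: a one-weight linear model trained by plain gradient descent, where the large "pre-training" dataset teaches the general pattern and a tiny "fine-tuning" dataset then adapts the already-learned weight. All data and values here are made up for illustration.

```python
# Toy illustration of pre-training followed by fine-tuning, using a
# one-weight model y = w * x and mean-squared-error gradient descent.

def train(w, data, lr, steps):
    """Gradient descent on mean squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pre-training": lots of generic data drawn from the pattern y = 2x.
pretrain_data = [(float(x), 2.0 * x) for x in range(1, 21)]
w = train(0.0, pretrain_data, lr=0.001, steps=200)  # w converges to ~2.0

# "Fine-tuning": a small task-specific dataset (pattern y = 2.1x) adapts
# the already-learned weight in a few steps instead of starting from 0.
finetune_data = [(1.0, 2.1), (2.0, 4.2)]
w = train(w, finetune_data, lr=0.01, steps=50)

print(round(w, 2))  # w has moved close to the fine-tuning target of 2.1
```

The pre-trained weight lands near the fine-tuning optimum before fine-tuning even begins, which is exactly why adaptation needs so few steps and so little data.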

Model Choice
intermediate
Choosing a model for fine-tuning

You want to build a sentiment analysis tool using a large language model. Which model type is best suited for fine-tuning on your specific dataset?

A. A simple linear regression model trained from scratch
B. A pre-trained transformer model like BERT or GPT
C. A random forest model without any pre-training
D. A clustering algorithm like K-means
💡 Hint

Consider models that have already learned language patterns and can be adapted.

Predict Output
advanced
Output of fine-tuning training metrics

What will be the output of the training accuracy after fine-tuning this simple model for 3 epochs?

import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)  # fix the seed so the printed accuracy is reproducible

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 2)

    def forward(self, x):
        return self.linear(x)

model = SimpleModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Dummy data: inputs and labels
inputs = torch.tensor([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5], [3.0, 3.0]])
labels = torch.tensor([0, 1, 0, 1])

for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

_, predicted = torch.max(outputs, 1)
correct = (predicted == labels).sum().item()
accuracy = correct / labels.size(0)
print(f"Accuracy after 3 epochs: {accuracy:.2f}")
A. Accuracy after 3 epochs: 0.50
B. Accuracy after 3 epochs: 1.00
C. Accuracy after 3 epochs: 0.25
D. Accuracy after 3 epochs: 0.75
💡 Hint

Check the model output and compare predicted labels to true labels after training.
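The accuracy lines at the end of the snippet reduce to an argmax-and-compare step. A plain-Python version of that same computation, with made-up logits so it runs without PyTorch, looks like this:

```python
# Plain-Python version of the argmax-and-compare accuracy computation.
# The logits and labels below are made up for illustration.
logits = [[2.0, 0.5],   # argmax -> class 0
          [0.1, 1.3],   # argmax -> class 1
          [0.9, 0.2],   # argmax -> class 0
          [0.4, 0.6]]   # argmax -> class 1
labels = [0, 1, 1, 1]

# torch.max(outputs, 1) returns the per-row argmax; this is the same idea.
predicted = [row.index(max(row)) for row in logits]
correct = sum(p == y for p, y in zip(predicted, labels))
accuracy = correct / len(labels)
print(f"Accuracy: {accuracy:.2f}")  # prints: Accuracy: 0.75
```

Here three of four predictions match the labels, giving 0.75; apply the same row-by-row comparison to the model's final outputs to answer the question.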

Hyperparameter
advanced
Effect of learning rate during fine-tuning

During fine-tuning a pre-trained model, what is the typical effect of using a learning rate that is too high?

A. The model will ignore the pre-trained weights and start training from scratch.
B. The model will always converge faster and achieve better accuracy.
C. The model will reduce overfitting by stopping training early.
D. The model may fail to converge and training loss can fluctuate or increase.
💡 Hint

Think about how big steps in learning affect the model's ability to settle on good solutions.
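The overshooting behavior the hint alludes to can be seen on the simplest possible loss, f(w) = w², whose gradient is 2w. With a small step size each update shrinks w toward the minimum; with a too-large one each update jumps past the minimum and lands farther away than it started. This is an illustrative sketch with made-up values, not a claim about any specific model.

```python
# Gradient descent on f(w) = w**2 (gradient 2*w), showing how step size
# decides between convergence and divergence.

def descend(w, lr, steps):
    for _ in range(steps):
        w -= lr * 2 * w  # each step multiplies w by (1 - 2*lr)
    return w

print(abs(descend(1.0, lr=0.1, steps=20)))  # factor 0.8 per step: shrinks toward 0
print(abs(descend(1.0, lr=1.1, steps=20)))  # factor -1.2 per step: |w| grows every step
```

The same mechanism applies during fine-tuning, with the added cost that large overshooting steps also destroy the useful pre-trained weights.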

🔧 Debug
expert
Identifying the cause of poor fine-tuning results

You fine-tuned a large pre-trained model on a small dataset but the validation accuracy is very low and training loss does not improve. Which issue is most likely causing this?

A. The model architecture is too simple for the task.
B. The dataset is too large, causing overfitting.
C. The learning rate is too high, causing unstable training.
D. The pre-training was done on a similar domain.
💡 Hint

Consider what happens when training a complex model on limited data with aggressive settings.
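One quick way to test the diagnosis in this scenario is to rerun a few steps at a much smaller learning rate and compare the loss trajectories: if the smaller rate makes the loss fall while the original rate makes it oscillate or grow, the learning rate was the problem. A toy sketch of that comparison, where a quadratic loss stands in for the real training loss and all numbers are made up for illustration:

```python
# Compare loss trajectories at two learning rates to diagnose instability.
# Loss f(w) = w**2 stands in for a real training loss.

def losses(lr, steps=10, w=1.0):
    """Record the loss after each gradient-descent step."""
    out = []
    for _ in range(steps):
        w -= lr * 2 * w
        out.append(w * w)
    return out

high = losses(lr=1.1)  # loss grows step over step -> unstable, rate too high
low = losses(lr=0.1)   # loss falls step over step -> stable
print(high[-1] > high[0], low[-1] < low[0])  # prints: True True
```

A monotonically growing or wildly fluctuating loss curve at the original rate, paired with a smoothly falling curve at the reduced rate, points directly at answer-style causes like an overly aggressive learning rate rather than at the dataset or architecture.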