
Cost optimization in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge: 5 Problems
🧠 Conceptual · Intermediate
Understanding Cost Components in Machine Learning

Which of the following is NOT typically considered a direct cost component when optimizing machine learning model deployment?

A. Developer salaries for model development
B. Compute resources used during model training
C. Electricity costs for powering data center cooling
D. Data storage costs for training datasets
💡 Hint

Think about costs directly tied to running and maintaining the model versus indirect overhead.

Model Choice · Intermediate
Choosing a Model for Cost Efficiency

You need to deploy a model on edge devices with limited compute and power. Which model type is the most cost-efficient choice?

A. Small convolutional neural network with quantization applied
B. Uncompressed recurrent neural network with high-precision weights
C. Ensemble of multiple deep neural networks
D. Large transformer-based model with billions of parameters
💡 Hint

Consider model size and computational requirements for edge deployment.
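To make the hint concrete, here is a rough sketch of how numeric precision alone changes a model's weight-memory footprint (the parameter count is an assumed figure for illustration, not taken from the question):

```python
# Rough weight-memory footprint at different precisions.
# 5 million parameters is an assumed size for a small CNN.
params = 5_000_000

fp32_mb = params * 4 / 1e6  # 32-bit floats: 4 bytes per weight
int8_mb = params * 1 / 1e6  # 8-bit quantized: 1 byte per weight

print(f"fp32: {fp32_mb:.0f} MB, int8: {int8_mb:.0f} MB")
```

Quantization cuts storage and memory bandwidth by roughly 4x versus fp32, which is why it pairs well with small architectures on power-constrained edge hardware.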

Metrics · Advanced
Evaluating Cost-Performance Tradeoff

Given two models with the following metrics:

  • Model A: Accuracy 92%, Inference cost $0.10 per 1000 predictions
  • Model B: Accuracy 90%, Inference cost $0.02 per 1000 predictions

Which metric best helps decide the cost-effectiveness of these models?

A. Inference cost alone
B. Accuracy divided by inference cost
C. Accuracy alone
D. Training time
💡 Hint

Think about combining accuracy and cost into one measure.
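One simple composite (assumed here for illustration; other weightings are possible) divides accuracy by inference cost, using the figures from the question:

```python
# Cost-effectiveness as accuracy (%) per dollar of inference cost,
# using the per-1000-prediction figures from the question.
models = {
    "Model A": {"accuracy_pct": 92, "cost_per_1k_usd": 0.10},
    "Model B": {"accuracy_pct": 90, "cost_per_1k_usd": 0.02},
}

for name, m in models.items():
    ratio = m["accuracy_pct"] / m["cost_per_1k_usd"]
    print(f"{name}: {ratio:.0f} accuracy points per dollar")
```

Model B's 2-point accuracy drop buys a 5x cost reduction, which a single combined metric surfaces immediately while either metric alone hides it.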

🔧 Debug · Advanced
Identifying Cost Bottleneck in Model Training Code

Consider this Python snippet for training a model:

for epoch in range(10):
    for batch in data_loader:
        outputs = model(batch)
        loss = loss_fn(outputs, batch.labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        time.sleep(5)

What is the main cause of unnecessary cost increase?

A. Not using GPU acceleration
B. Missing learning rate scheduler
C. The time.sleep(5) call inside the training loop
D. Not shuffling data_loader
💡 Hint

Look for code that delays training without benefit.
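A back-of-envelope sketch of what such a delay costs: with an assumed 1,000 batches per epoch (a figure chosen for illustration, not stated in the snippet), a 5-second pause per batch adds hours of billed idle time:

```python
# Idle time introduced by a 5-second sleep inside the training loop.
# The batch count per epoch is an assumed figure for illustration.
epochs = 10
batches_per_epoch = 1000
sleep_seconds = 5

idle_seconds = epochs * batches_per_epoch * sleep_seconds
print(f"Idle time: {idle_seconds} s ≈ {idle_seconds / 3600:.1f} hours")
```

On hourly-billed cloud instances, every one of those idle seconds is paid for while the GPU does no useful work.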

Hyperparameter · Expert
Optimizing Hyperparameters for Cost Reduction

You want to reduce training cost by adjusting batch size and learning rate. Which combination is most likely to reduce cost without hurting model convergence?

A. Decrease batch size and decrease learning rate
B. Decrease batch size and increase learning rate
C. Increase batch size and decrease learning rate
D. Increase batch size and increase learning rate
💡 Hint

Think about how batch size affects training speed and learning rate affects convergence.
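One widely cited heuristic that ties the two knobs together is the linear scaling rule, sketched here with assumed base values: when batch size grows by a factor k, scale the learning rate by roughly the same factor so convergence behavior is preserved.

```python
# Linear scaling rule sketch: learning rate grows with batch size.
# Base batch size and learning rate are assumed values for illustration.
base_batch, base_lr = 32, 0.001

for k in (1, 2, 4, 8):
    batch = base_batch * k
    lr = base_lr * k
    print(f"batch={batch:4d}  lr={lr:.4f}")
```

Larger batches mean fewer optimizer steps per epoch (and better hardware utilization), which is where the cost saving comes from; the matching learning-rate increase keeps the effective progress per epoch comparable.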