Prompt Engineering / GenAI · ~20 mins

Image-to-image transformation in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
What is the main goal of image-to-image transformation models?
Choose the best description of what image-to-image transformation models do.
A. Classify images into categories based on their content.
B. Detect objects and draw bounding boxes on images.
C. Generate images from random noise without any input image.
D. Convert an input image into another image with desired changes, like style or content.
💡 Hint
Think about models that change one image into another.
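As a concrete illustration (not part of the quiz), even a simple grayscale conversion follows the image-to-image contract: an input image maps to an output image of the same spatial size, with a desired change applied. A minimal NumPy sketch:

```python
import numpy as np

def to_grayscale(image: np.ndarray) -> np.ndarray:
    """Map an RGB image (H, W, 3) to a 3-channel grayscale image (H, W, 3).

    Uses the standard ITU-R BT.601 luma weights. A learned image-to-image
    model follows the same input-image -> output-image contract.
    """
    weights = np.array([0.299, 0.587, 0.114])
    luma = image @ weights                         # (H, W)
    return np.repeat(luma[..., None], 3, axis=-1)  # keep the (H, W, 3) shape

rgb = np.random.rand(64, 64, 3)
gray = to_grayscale(rgb)
print(gray.shape)  # (64, 64, 3): the output is itself an image
```

The key property: the output lives in image space, unlike a classifier (option A), which would output a label.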
Model Choice
intermediate
Which model architecture is commonly used for image-to-image transformation tasks?
Select the model architecture best suited for image-to-image transformation.
A. Convolutional Autoencoder with skip connections (U-Net)
B. Recurrent Neural Network (RNN)
C. Transformer for text generation
D. Fully connected feedforward network
💡 Hint
Look for a model that preserves spatial details and can reconstruct images.
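To make the U-Net idea concrete, here is a minimal, hypothetical PyTorch sketch of an encoder-decoder with a single skip connection (not a production architecture). The skip connection carries high-resolution features from the encoder straight to the decoder, which is what preserves spatial detail:

```python
import torch
from torch import nn

class TinyUNet(nn.Module):
    """Minimal encoder-decoder with one skip connection (U-Net style)."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)          # halve spatial size
        self.mid = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        # The decoder sees upsampled features concatenated with the skip
        self.dec = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x):
        skip = self.enc(x)                   # (N, 16, H, W)
        h = self.mid(self.down(skip))        # (N, 16, H/2, W/2)
        h = self.up(h)                       # (N, 16, H, W)
        return self.dec(torch.cat([h, skip], dim=1))  # (N, 3, H, W)

out = TinyUNet()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64]): same spatial size as the input
```

Real U-Nets stack several such down/up levels with a skip at each resolution, but the principle is the same.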
Metrics
advanced
Which metric best measures the quality of generated images in image-to-image transformation?
Choose the metric that evaluates how close the generated image is to the target image in terms of pixel-level similarity.
A. Mean Squared Error (MSE)
B. Accuracy
C. Perplexity
D. BLEU score
💡 Hint
Think about a metric that calculates average squared differences between pixels.
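MSE is simple to compute directly; a quick sketch comparing a generated image to its target:

```python
import numpy as np

def mse(generated: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error: the average of squared per-pixel differences."""
    return float(np.mean((generated - target) ** 2))

target = np.zeros((4, 4))
generated = np.full((4, 4), 0.5)  # every pixel is off by 0.5
print(mse(generated, target))     # 0.25, i.e. 0.5 squared
```

Note that accuracy, perplexity, and BLEU apply to classification and text tasks, not pixel-level image comparison; in practice, perceptual metrics such as SSIM or LPIPS are also common alongside MSE.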
🔧 Debug
advanced
What error will this image-to-image transformation training code raise?
Consider this Python snippet for training a model. What error occurs when running it?

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1)
)

input_image = torch.randn(1, 3, 256, 256)
output = model(input_image)

loss_fn = nn.MSELoss()
# Target image has wrong shape
target_image = torch.randn(1, 3, 128, 128)
loss = loss_fn(output, target_image)
```
A. TypeError: loss_fn() missing 1 required positional argument
B. SyntaxError: invalid syntax in model definition
C. RuntimeError: The size of tensor a (256) must match the size of tensor b (128) at non-singleton dimension 3
D. No error, code runs successfully
💡 Hint
Check if output and target images have the same shape before computing loss.
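One way to make the snippet run, assuming the intent is supervised pixel-wise training, is to resize the target to match the output before computing the loss (the convolutions use `padding=1`, so the output keeps the 256x256 input size):

```python
import torch
from torch import nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

input_image = torch.randn(1, 3, 256, 256)
output = model(input_image)            # (1, 3, 256, 256): padding=1 preserves size

loss_fn = nn.MSELoss()
target_image = torch.randn(1, 3, 128, 128)
# Resize the target to the output's spatial size before computing the loss
target_resized = F.interpolate(
    target_image, size=output.shape[-2:], mode="bilinear", align_corners=False
)
loss = loss_fn(output, target_resized)
print(loss.shape)  # torch.Size([]): a scalar loss, no shape error
```

Alternatively, load or generate the target at 256x256 in the first place; the essential invariant is that output and target shapes match exactly.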
Hyperparameter
expert
Which hyperparameter adjustment is most likely to improve image sharpness in a GAN-based image-to-image model?
You notice generated images are blurry. Which change is most effective to improve sharpness?
A. Decrease the learning rate of the generator
B. Increase the discriminator capacity or depth
C. Reduce the batch size drastically
D. Remove batch normalization layers from the generator
💡 Hint
Sharper images often come from a stronger discriminator that pushes the generator harder.
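As an illustration of what "increase the discriminator capacity or depth" means in code, here is a hypothetical PatchGAN-style discriminator whose depth is a hyperparameter; adding strided conv blocks gives it more parameters and a larger receptive field with which to penalize blur:

```python
import torch
from torch import nn

def make_discriminator(depth: int = 3, base_channels: int = 16) -> nn.Sequential:
    """Stack `depth` strided conv blocks; greater depth means more capacity."""
    layers, in_ch = [], 3
    for i in range(depth):
        out_ch = base_channels * (2 ** i)   # double channels at each level
        layers += [nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                   nn.LeakyReLU(0.2)]
        in_ch = out_ch
    layers.append(nn.Conv2d(in_ch, 1, 3, padding=1))  # per-patch real/fake scores
    return nn.Sequential(*layers)

shallow = make_discriminator(depth=2)
deep = make_discriminator(depth=4)      # higher-capacity variant
x = torch.randn(1, 3, 64, 64)
print(shallow(x).shape, deep(x).shape)  # (1, 1, 16, 16) vs (1, 1, 4, 4)
```

Swapping `shallow` for `deep` in an adversarial training loop is the kind of change option B describes: a stronger critic forces the generator to produce high-frequency detail rather than blurry averages.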