0
0
Prompt Engineering / GenAIml~20 mins

Red teaming and adversarial testing in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Choose your learning style9 modes available
Challenge - 5 Problems
🎖️
Red Teaming Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
2:00remaining
Purpose of Red Teaming in AI

What is the main purpose of red teaming in the context of AI systems?

ATo increase the size of the training dataset by generating synthetic data
BTo improve the speed of AI model training by optimizing hardware usage
CTo identify vulnerabilities and weaknesses by simulating attacks or adversarial inputs
DTo reduce the model size for deployment on mobile devices
Attempts:
2 left
💡 Hint

Think about how red teaming helps find problems before they happen.

Predict Output
intermediate
2:00remaining
Output of Adversarial Example Generation Code

What will be the output of the following Python code snippet that generates an adversarial example for a simple model?

Prompt Engineering / GenAI
import numpy as np

def simple_model(x):
    return x * 2

original_input = np.array([1.0, 2.0, 3.0])
adversarial_perturbation = np.array([0.1, -0.2, 0.3])
adversarial_input = original_input + adversarial_perturbation
output = simple_model(adversarial_input)
print(output)
A[2.0 4.0 6.0]
B[0.2 0.4 0.6]
C[1.1 1.8 3.3]
D[2.2 3.6 6.6]
Attempts:
2 left
💡 Hint

Remember the model doubles the input values.

Model Choice
advanced
2:00remaining
Best Model Type for Adversarial Robustness

Which type of model architecture is generally considered more robust against adversarial attacks?

AShallow linear models without nonlinearities
BModels trained with adversarial training techniques
CDeep neural networks without any regularization
DUntrained random weight neural networks
Attempts:
2 left
💡 Hint

Think about training methods that expose the model to attacks during learning.

Metrics
advanced
2:00remaining
Metric to Evaluate Adversarial Robustness

Which metric is most appropriate to evaluate the adversarial robustness of a classification model?

AAccuracy on adversarially perturbed test data
BStandard accuracy on clean test data
CTraining loss on the training dataset
DModel size in megabytes
Attempts:
2 left
💡 Hint

Robustness means performance when under attack.

🔧 Debug
expert
2:00remaining
Debugging Adversarial Attack Code

Consider this Python code snippet intended to create an adversarial example by adding a small perturbation to an input tensor. What error will this code raise?

Prompt Engineering / GenAI
import torch

input_tensor = torch.tensor([1.0, 2.0, 3.0])
perturbation = torch.tensor([0.1, 0.1])
adversarial_input = input_tensor + perturbation
print(adversarial_input)
ARuntimeError due to size mismatch in tensor addition
BTypeError because tensors must be converted to numpy arrays first
CNo error, outputs tensor with added perturbation
DNameError because 'torch' is not imported
Attempts:
2 left
💡 Hint

Check if the tensors have the same shape before adding.