Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)
Complete the code to create an adversarial example by adding small noise to the input.
Prompt Engineering / GenAI
adversarial_input = original_input + [1]
Common Mistakes
Using 'labels' instead of noise to modify the input.
Adding the model or loss instead of noise.
Explanation: Adding small noise to the original input creates an adversarial example that can fool the model.
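As a concrete sketch of the completed line (all values hypothetical), the perturbation is simply added to the input element-wise:

```python
# Hypothetical input and perturbation; the completed exercise line is
# adversarial_input = original_input + noise, shown here element-wise.
original_input = [0.2, 0.5, 0.9]       # toy feature vector
noise = [0.01, -0.01, 0.01]            # small crafted perturbation
adversarial_input = [x + n for x, n in zip(original_input, noise)]
print(adversarial_input)               # values stay close to the original
```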
Task 2: Fill in the blank (medium)
Complete the code to calculate the loss used for adversarial training.
loss = loss_function([1], predictions)
Common Mistakes
Using adversarial input or model output instead of labels.
Confusing inputs with labels.
Explanation: The loss compares the model's predictions to the original labels to measure error.
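A minimal sketch of the completed call, using a hand-rolled mean-squared-error loss (the loss function and all values are hypothetical stand-ins, not the exercise's actual setup):

```python
def loss_function(labels, predictions):
    # mean squared error between the true labels and the model's predictions
    return sum((y - p) ** 2 for y, p in zip(labels, predictions)) / len(labels)

labels = [1.0, 0.0, 1.0]        # hypothetical ground-truth labels
predictions = [0.9, 0.2, 0.8]   # hypothetical model outputs
loss = loss_function(labels, predictions)   # completed blank [1]: labels
```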
Task 3: Fill in the blank (hard)
Fix the error in the code to generate adversarial noise using the gradient sign method.
noise = epsilon * [1](loss, input, retain_graph=True)
Common Mistakes
Using backward() instead of accessing the gradient.
Calling zero_grad() or detach() incorrectly.
Explanation: The gradient of the loss with respect to the input is used to create adversarial noise.
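The corrected line computes the gradient of the loss with respect to the input (in PyTorch this would come from `torch.autograd.grad`) and takes its sign. Here is a framework-free sketch on a toy one-parameter model, with the gradient derived by hand; the model, values, and `sign` helper are all hypothetical:

```python
def sign(v):
    # sign of a scalar: +1, -1, or 0
    return (v > 0) - (v < 0)

w, y = 2.0, 1.0                  # toy model weight and target label
x = 0.8                          # input to perturb
# For L = (w*x - y)**2 the gradient w.r.t. the input is dL/dx = 2*(w*x - y)*w
grad_x = 2 * (w * x - y) * w
epsilon = 0.1
noise = epsilon * sign(grad_x)   # fast gradient sign method (FGSM) noise
adversarial_x = x + noise
```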
Task 4: Fill in the blank (hard)
Fill both blanks to create a dictionary of adversarial examples filtered by confidence score.
adv_examples = {input: output for input, output in dataset.items() if output [1] threshold and confidence_score(input) [2] 0.8}
Common Mistakes
Using '<' instead of '>' for output filtering.
Using '<' instead of '>=' for confidence filtering.
Explanation: We select outputs greater than the threshold and confidence scores greater than or equal to 0.8.
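A runnable sketch of the completed comprehension. The dataset and `confidence_score` function are hypothetical stand-ins for whatever the exercise assumes (the loop variable is renamed `inp` to avoid shadowing Python's built-in `input`):

```python
# Hypothetical dataset mapping inputs to model outputs, plus made-up scores.
dataset = {"img_a": 0.9, "img_b": 0.4, "img_c": 0.7}
_scores = {"img_a": 0.95, "img_b": 0.99, "img_c": 0.50}

def confidence_score(inp):
    # stand-in for the exercise's scoring function
    return _scores[inp]

threshold = 0.5
# Completed blanks: [1] = '>' and [2] = '>='
adv_examples = {inp: out for inp, out in dataset.items()
                if out > threshold and confidence_score(inp) >= 0.8}
print(adv_examples)   # only img_a passes both filters
```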
Task 5: Fill in the blank (hard)
Fill all three blanks to implement an adversarial training step that updates the model parameters.
optimizer.[1]()
loss = loss_function(model([2]), labels)
loss.[3]()
Common Mistakes
Not clearing gradients before backward pass.
Using original input instead of adversarial input.
Calling backward on loss incorrectly.
Explanation: We clear the gradients, compute the loss on the adversarial input, then backpropagate to update the model.
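The three blanks are the standard zero_grad / forward-on-adversarial-input / backward sequence. A framework-free sketch of one such step on a one-parameter model, with each line commented with the framework call it mimics (all values are hypothetical; a real loop would use the optimizer's actual methods):

```python
w = 0.5                          # single model parameter
lr = 0.1                         # learning rate
adversarial_input, label = 1.2, 1.0

grad_w = 0.0                                            # optimizer.zero_grad(): clear stale gradients
prediction = w * adversarial_input                      # model(adversarial_input)
loss = (prediction - label) ** 2                        # loss_function(..., labels)
grad_w = 2 * (prediction - label) * adversarial_input   # loss.backward(): gradient of loss w.r.t. w
w = w - lr * grad_w                                     # optimizer.step(): apply the update
```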