Challenge - 5 Problems
Privacy Protector
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
Intermediate · 2:00 remaining
Understanding Differential Privacy
Which of the following best describes the main goal of differential privacy in machine learning?
Attempts: 2 left
💡 Hint
Think about protecting individual data privacy when training models.
✗ Incorrect
Differential privacy aims to protect individual data by making it hard to tell if any one person's data was included in the training set.
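The explanation above can be made concrete with the Laplace mechanism, the standard way to release a numeric query with differential privacy. This is a minimal sketch; the function name and the counting-query scenario are illustrative:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon -> larger noise -> it is harder to tell whether any
    one person's data (whose influence on the query is bounded by
    `sensitivity`) was in the dataset at all.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
exact_count = 120  # e.g. "how many patients have condition X"
# A counting query changes by at most 1 when one person is added or
# removed, so its sensitivity is 1.
for eps in (0.1, 1.0):
    noisy = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=eps, rng=rng)
    print(f"epsilon={eps}: noisy count = {noisy:.1f}")
```

Note how the epsilon=0.1 release is typically much further from 120 than the epsilon=1.0 release: the privacy parameter directly controls the noise scale.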
❓ Predict Output
Intermediate · 2:00 remaining
Output of Privacy-Preserving Noise Addition
What is the output of the following Python code that adds Laplace noise to a data point for privacy?
ML Python
import numpy as np

np.random.seed(0)
data_point = 10
noise = np.random.laplace(loc=0.0, scale=1.0)
private_data = data_point + noise
print(round(private_data, 2))
Attempts: 2 left
💡 Hint
Run the code to see the exact noise value added.
✗ Incorrect
With np.random.seed(0), the first uniform draw is about 0.5488, so the legacy Laplace sampler returns -log(2 - 2*0.5488) ≈ 0.10. Added to 10 and rounded to two decimals, the output is 10.1.
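You can verify this by re-running the seeded snippet yourself (it uses the legacy global-state `np.random.seed` API, as in the question, which makes the first Laplace draw deterministic):

```python
import numpy as np

np.random.seed(0)  # legacy global RNG: fixes the sequence of draws
data_point = 10
noise = np.random.laplace(loc=0.0, scale=1.0)  # roughly 0.10 for this seed
private_data = data_point + noise
print(round(private_data, 2))
```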
❓ Model Choice
Advanced · 2:00 remaining
Choosing a Privacy-Preserving Model Technique
You want to train a machine learning model on sensitive health data while ensuring privacy. Which technique is best suited for this?
Attempts: 2 left
💡 Hint
Think about methods that keep data on user devices.
✗ Incorrect
Federated learning trains models locally on each device and shares only model updates, so the raw data never leaves the device.
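The explanation above can be sketched as a toy federated-averaging loop: each client fits a local model on data that stays on-device, and the server aggregates only the weight updates. This is a simplified FedAvg on a linear model; the function names, three-client setup, and synthetic data are all illustrative:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """One client's local training: gradient descent on squared error.
    Only the updated weights leave the device, never X or y."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step of FedAvg: dataset-size-weighted mean of client models."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # e.g. three hospitals, each keeping its records local
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = federated_average(updates, [len(y) for _, y in clients])
print(w_global)  # converges close to true_w without pooling any raw data
```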
❓ Metrics
Advanced · 2:00 remaining
Evaluating Privacy-Utility Tradeoff
In a privacy-preserving model, which metric combination best reflects a good balance between privacy and model usefulness?
Attempts: 2 left
💡 Hint
Lower epsilon means stronger privacy.
✗ Incorrect
A low epsilon means stronger privacy, and high accuracy means the model is still useful.
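One way to see the tradeoff quantitatively: for the Laplace mechanism, the expected absolute error of a released statistic is sensitivity/epsilon, so utility degrades exactly as epsilon shrinks. A small sketch (the function name is illustrative):

```python
import numpy as np

def avg_abs_error(epsilon, sensitivity=1.0, trials=10_000, rng=None):
    """Estimate the mean absolute error the Laplace mechanism adds.
    Expected |noise| equals sensitivity/epsilon, so halving epsilon
    (stronger privacy) doubles the typical error (weaker utility)."""
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.laplace(scale=sensitivity / epsilon, size=trials)
    return np.abs(noise).mean()

for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: mean |error| ~ {avg_abs_error(eps):.2f}")
```

A good privacy-utility balance therefore shows up as a small epsilon together with accuracy that stays close to the non-private baseline.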
🔧 Debug
Expert · 3:00 remaining
Debugging Privacy Leakage in Model Training
You trained a model with differential privacy but found it still leaks private information. What in the code below causes the leak?
ML Python
def train_model(data, epsilon):
    # Missing noise addition step
    model = SomeModel()
    model.fit(data)
    return model
Attempts: 2 left
💡 Hint
Differential privacy requires adding noise during training.
✗ Incorrect
Without a noise-addition step, no differential privacy guarantee holds, so private information can leak regardless of the epsilon parameter passed in.
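A hedged sketch of how the missing step could look: gradient descent with clipping plus per-step Laplace noise. SomeModel in the question is a placeholder, so a linear model stands in here, and the noise calibration is illustrative rather than a rigorous epsilon-DP accounting:

```python
import numpy as np

def train_model_dp(X, y, epsilon, clip=1.0, lr=0.1, steps=100, rng=None):
    """Linear regression by gradient descent, with the step the buggy
    snippet omitted: clip the gradient (bounding each record's
    influence), then add Laplace noise before every weight update."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    eps_step = epsilon / steps  # naive composition: split budget per step
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        norm = np.linalg.norm(grad)
        if norm > clip:
            grad *= clip / norm  # clipping caps the gradient's sensitivity
        # the noise addition the original train_model was missing
        grad += rng.laplace(scale=clip / (eps_step * len(y)), size=grad.shape)
        w -= lr * grad
    return w
```

With a generous privacy budget the fit stays close to ordinary regression; shrinking epsilon visibly degrades it, which is the expected cost of the guarantee.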