Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to load a pre-trained NLP model for bias analysis.

from transformers import pipeline
nlp = pipeline([1])
Common Mistakes
Choosing 'text-generation' which creates text but doesn't analyze bias.
Using 'translation' which changes language but doesn't detect bias.
Explanation: The sentiment-analysis pipeline is commonly used to analyze bias in text by detecting positive or negative sentiment.
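With the blank filled in as 'sentiment-analysis', the completed snippet can be used to probe a model for bias. A rough sketch (the example sentences are illustrative, and the pipeline's default model is downloaded from the Hugging Face Hub on first use):

```python
from transformers import pipeline

# Answer to the blank: the "sentiment-analysis" task.
nlp = pipeline("sentiment-analysis")

# Score two sentences that differ only in the profession mentioned;
# a large gap in sentiment between such pairs can signal bias.
results = nlp(["The doctor was brilliant.", "The nurse was brilliant."])
for r in results:
    print(r["label"], round(r["score"], 3))
```

Each result is a dict with a "label" (POSITIVE/NEGATIVE) and a confidence "score"; comparing scores across minimally different sentence pairs is one simple bias probe.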
2. Fill in the blank (medium)
Complete the code to calculate a bias metric using word embeddings.

import numpy as np
bias_score = np.dot(embedding1, [1])
Common Mistakes
Using an unrelated embedding that doesn't correspond to the comparison concept.
Confusing the embeddings and using the same embedding twice.
Explanation: The dot product between two embeddings measures their similarity (it equals cosine similarity once the vectors are normalized), which helps quantify bias.
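A minimal sketch of such a bias metric, using toy 3-dimensional vectors in place of real word embeddings (the names man, woman, and engineer are illustrative). The vectors are normalized first so the dot product behaves like cosine similarity:

```python
import numpy as np

def bias_score(embedding1, embedding2):
    """Cosine-normalized dot product: how strongly two word
    vectors are associated. Values near 1 mean high similarity."""
    e1 = embedding1 / np.linalg.norm(embedding1)
    e2 = embedding2 / np.linalg.norm(embedding2)
    return float(np.dot(e1, e2))

# Toy vectors standing in for real word embeddings.
man = np.array([1.0, 0.2, 0.0])
woman = np.array([0.0, 0.2, 1.0])
engineer = np.array([0.9, 0.3, 0.1])

# An asymmetry like this is the kind of signal bias metrics look for.
print(bias_score(man, engineer) > bias_score(woman, engineer))  # → True
```

Comparing how strongly a target word ("engineer") associates with each side of a group pair ("man" vs. "woman") is the core idea behind association-based bias metrics.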
3. Fill in the blank (hard)
Fix the error in the code to remove gender bias from word embeddings.

def debias_embedding(embedding, gender_direction):
    corrected = embedding - embedding[1]gender_direction) * gender_direction
    return corrected
Common Mistakes
Using multiplication (*) instead of dot product for projection.
Using addition (+) which increases bias instead of removing it.
Explanation: The dot product, embedding.dot(gender_direction), projects the embedding onto the gender direction so that component can be subtracted (this assumes gender_direction is a unit vector).
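The corrected function can be sketched as below. Normalizing gender_direction is an addition not shown in the task, so the projection stays correct even when the direction vector is not unit length:

```python
import numpy as np

def debias_embedding(embedding, gender_direction):
    """Remove the component of `embedding` along `gender_direction`
    (vector rejection, the core step of hard-debiasing)."""
    d = gender_direction / np.linalg.norm(gender_direction)
    return embedding - embedding.dot(d) * d

# Toy example: gender direction along the first axis.
direction = np.array([1.0, 0.0, 0.0])
vec = np.array([0.8, 0.5, 0.2])
debiased = debias_embedding(vec, direction)

print(debiased)                  # the first component is now zero
print(debiased.dot(direction))   # → 0.0 (no gender component left)
```

After debiasing, the embedding is orthogonal to the gender direction, which is exactly what subtracting the projection achieves.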
4. Fill in the blank (hard)
Fill both blanks to create a dictionary comprehension that filters biased words.

biased_words = {word: score for word, score in scores.items()
                if score [1] threshold and len(word) [2] 3}
Common Mistakes
Using '<' for score which selects low bias scores.
Using '<=' for length which includes very short words.
Explanation: We keep words whose score is greater than the threshold and whose length is at least 3 characters, focusing on meaningful biased words rather than short function words.
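With '>' and '>=' in the blanks, the comprehension runs as below; the scores dictionary and threshold are made-up illustrations:

```python
# Toy bias scores; real scores would come from an embedding-based metric.
scores = {"he": 0.9, "nurse": 0.7, "the": 0.1, "engineer": 0.8, "a": 0.95}
threshold = 0.5

# Keep words scoring above the threshold AND at least 3 characters long,
# filtering out short function words like "he" and "a".
biased_words = {word: score for word, score in scores.items()
                if score > threshold and len(word) >= 3}

print(biased_words)  # → {'nurse': 0.7, 'engineer': 0.8}
```

Note that "he" and "a" are excluded despite high scores because they fail the length check, which is the point of the second condition.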
5. Fill in the blank (hard)
Fill all three blanks to create a fairness evaluation function.

def evaluate_fairness(predictions, labels):
    correct = sum(1 for p, l in zip(predictions, labels) if p [1] l)
    total = len(labels)
    fairness_score = correct [2] total
    return fairness_score [3] 1.0
Common Mistakes
Using assignment '=' instead of comparison '=='.
Using multiplication '*' instead of division '/' for accuracy.
Checking fairness_score <= 1.0 which is incorrect for minimum fairness.
Explanation: We count correct predictions (p == l), compute accuracy (correct / total), and check whether fairness_score >= 1.0; since accuracy never exceeds 1.0, this returns True only when every prediction matches its label.
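With '==', '/', and '>=' filled in, the completed function looks like this; the sample prediction lists are illustrative:

```python
def evaluate_fairness(predictions, labels):
    """Return True when every prediction matches its label.
    correct / total is plain accuracy, so the score can never exceed
    1.0 and `>= 1.0` holds only for a perfect match, which is this
    task's (deliberately strict) fairness criterion."""
    correct = sum(1 for p, l in zip(predictions, labels) if p == l)
    total = len(labels)
    fairness_score = correct / total
    return fairness_score >= 1.0

print(evaluate_fairness([1, 0, 1], [1, 0, 1]))  # → True
print(evaluate_fairness([1, 0, 0], [1, 0, 1]))  # → False
```

One mismatch drops the accuracy below 1.0, so the check fails, matching the "minimum fairness" framing in the Common Mistakes above.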