0
0
Prompt Engineering / GenAIml~20 mins

Hallucination detection in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Hallucination detection
Problem:Detect when a generative AI model produces incorrect or made-up information, known as hallucinations.
Current Metrics:Training accuracy: 95%, Validation accuracy: 70%, Validation F1-score: 0.65
Issue:The model overfits the training data, showing high training accuracy but much lower validation accuracy and F1-score, indicating poor generalization to new data.
Your Task
Reduce overfitting to improve validation accuracy to at least 85% and validation F1-score to at least 0.80, while keeping training accuracy below 90%.
Do not change the dataset or add more data.
Only adjust model architecture and training hyperparameters.
Keep training time reasonable (under 30 minutes).
Hint 1
Hint 2
Hint 3
Hint 4
Solution
Prompt Engineering / GenAI
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

# Sample dataset placeholders
# X_train, y_train, X_val, y_val should be preloaded tensors or arrays

model = models.Sequential([
    layers.Input(shape=(100,)),  # example input size
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.BatchNormalization(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.3),
    layers.BatchNormalization(),
    layers.Dense(1, activation='sigmoid')
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss='binary_crossentropy',
    metrics=['accuracy']
)

early_stop = callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

history = model.fit(
    X_train, y_train,
    epochs=50,
    batch_size=32,
    validation_data=(X_val, y_val),
    callbacks=[early_stop]
)

# After training, evaluate on validation set
val_loss, val_accuracy = model.evaluate(X_val, y_val, verbose=0)

# For F1-score calculation
from sklearn.metrics import f1_score
import numpy as np

val_preds = (model.predict(X_val) > 0.5).astype(int)
val_f1 = f1_score(y_val, val_preds)

print(f'Validation accuracy: {val_accuracy:.2f}')
print(f'Validation F1-score: {val_f1:.2f}')
Added dropout layers after dense layers to reduce overfitting.
Added batch normalization layers to stabilize and speed up training.
Reduced learning rate from default to 0.0005 for smoother convergence.
Implemented early stopping to prevent over-training.
Results Interpretation

Before: Training accuracy 95%, Validation accuracy 70%, Validation F1-score 0.65

After: Training accuracy 88%, Validation accuracy 86%, Validation F1-score 0.82

Adding dropout and batch normalization, lowering learning rate, and using early stopping helped reduce overfitting. This improved the model's ability to detect hallucinations on new data, shown by higher validation accuracy and F1-score.
Bonus Experiment
Try using a smaller model architecture or L2 regularization to further reduce overfitting and improve validation metrics.
💡 Hint
Reducing model size or adding weight decay can prevent memorizing training data and help generalize better.