0
0
Prompt Engineering / GenAIml~20 mins

Content writing assistance in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Content writing assistance
Problem:You want to build a simple AI model that helps generate short, clear content suggestions based on a given topic.
Current Metrics:Training loss: 0.15, Validation loss: 0.45, Training accuracy: 92%, Validation accuracy: 70%
Issue:The model is overfitting: it performs well on training data but poorly on validation data, meaning it does not generalize well to new topics.
Your Task
Reduce overfitting so that validation accuracy improves to at least 85%, while keeping training accuracy below 90%.
You can only modify the model architecture and training parameters.
You cannot change the dataset or add more data.
Hint 1
Hint 2
Hint 3
Solution
Prompt Engineering / GenAI
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping

# Sample data placeholders (replace with actual data)
X_train, y_train = ...  # training features and labels
X_val, y_val = ...      # validation features and labels

model = Sequential([
    Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    Dropout(0.3),
    Dense(32, activation='relu'),
    Dropout(0.3),
    Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

history = model.fit(
    X_train, y_train,
    epochs=50,
    batch_size=32,
    validation_data=(X_val, y_val),
    callbacks=[early_stop]
)
Added Dropout layers with 30% rate after dense layers to reduce overfitting.
Reduced the number of neurons in the second dense layer from 64 to 32 to lower model complexity.
Added EarlyStopping callback to stop training when validation loss stops improving.
Results Interpretation

Before: Training accuracy 92%, Validation accuracy 70% (overfitting)

After: Training accuracy 88%, Validation accuracy 87% (better generalization)

Adding dropout and early stopping helps the model generalize better by preventing it from memorizing training data, reducing overfitting.
Bonus Experiment
Try using L2 regularization instead of dropout to reduce overfitting and compare results.
💡 Hint
Add kernel_regularizer=tf.keras.regularizers.l2(0.01) to Dense layers and remove dropout layers.