
Dense (fully connected) layers in TensorFlow - Model Metrics & Evaluation

Which metric matters for Dense (fully connected) layers and WHY

Dense layers are used in many tasks, such as classification and regression. The metric you choose depends on the task:

  • Classification: Accuracy, Precision, Recall, and F1-score help understand how well the model predicts classes.
  • Regression: Mean Squared Error (MSE) or Mean Absolute Error (MAE) show how close predictions are to actual values.

For classification, Precision and Recall matter because they balance false positives against false negatives. For regression, error metrics quantify how far predictions deviate from the true values.
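As a minimal sketch of the two metric families (pure Python with toy data, so no TensorFlow installation is needed; the arrays are illustrative, not from a real model):

```python
# Classification: accuracy = fraction of correct class predictions.
y_true_cls = [1, 0, 1, 1, 0]
y_pred_cls = [1, 0, 0, 1, 0]
accuracy = sum(t == p for t, p in zip(y_true_cls, y_pred_cls)) / len(y_true_cls)
print(f"Accuracy: {accuracy:.2f}")  # 4 of 5 correct -> 0.80

# Regression: MSE and MAE measure distance from the true values.
y_true_reg = [2.0, 3.5, 5.0]
y_pred_reg = [2.5, 3.0, 4.0]
errors = [t - p for t, p in zip(y_true_reg, y_pred_reg)]
mse = sum(e ** 2 for e in errors) / len(errors)  # squares punish large errors
mae = sum(abs(e) for e in errors) / len(errors)  # robust to outliers
print(f"MSE: {mse:.3f}, MAE: {mae:.3f}")
```

In Keras, the same choice is made when compiling the model, e.g. `metrics=["accuracy"]` for classification or `metrics=["mae"]` for regression.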

Confusion matrix example for Dense layer classification output
      Actual \ Predicted | Positive | Negative
      -------------------|----------|---------
      Positive           |    TP=50 |   FN=10
      Negative           |    FP=5  |   TN=35
    

Here, TP = True Positives, FP = False Positives, TN = True Negatives, FN = False Negatives.

Precision = TP / (TP + FP) = 50 / (50 + 5) = 0.91

Recall = TP / (TP + FN) = 50 / (50 + 10) = 0.83

F1-score = 2 * (Precision * Recall) / (Precision + Recall) ≈ 0.87
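The arithmetic above can be reproduced directly from the confusion-matrix counts:

```python
# Counts taken from the confusion matrix above.
TP, FN, FP, TN = 50, 10, 5, 35

precision = TP / (TP + FP)  # 50 / 55
recall = TP / (TP + FN)     # 50 / 60
f1 = 2 * precision * recall / (precision + recall)

print(f"Precision: {precision:.2f}")  # 0.91
print(f"Recall:    {recall:.2f}")     # 0.83
print(f"F1-score:  {f1:.2f}")         # 0.87
```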

Precision vs Recall tradeoff with Dense layers

Imagine a Dense layer model detecting spam emails:

  • High Precision: Most emails marked as spam really are spam. Good to avoid losing important emails.
  • High Recall: Most spam emails are caught. Good to keep inbox clean but may mark some good emails as spam.

Depending on what matters more, you adjust the model or threshold to favor precision or recall.
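A small sketch of the threshold tradeoff, assuming hypothetical sigmoid scores from the spam model (the scores and labels are made up for illustration):

```python
# Hypothetical spam probabilities from a Dense layer's sigmoid output.
scores = [0.95, 0.80, 0.60, 0.55, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]  # 1 = spam

def precision_recall(threshold):
    """Binarize scores at the given threshold and compute precision/recall."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, labels))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, labels))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for thr in (0.5, 0.7):
    p, r = precision_recall(thr)
    print(f"threshold={thr}: precision={p:.2f}, recall={r:.2f}")
```

Raising the threshold from 0.5 to 0.7 lifts precision (fewer good emails flagged) at the cost of recall (more spam slips through), which is exactly the tradeoff described above.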

What "good" vs "bad" metric values look like for Dense layers

For classification tasks (rough rules of thumb; acceptable values depend on the problem):

  • Good: Accuracy above 85%, Precision and Recall both above 80%, and a balanced F1-score.
  • Bad: High accuracy but very low Recall (missing many positives), or very low Precision (many false alarms).

For regression tasks:

  • Good: Low MSE or MAE relative to the scale of the target values, meaning predictions are close to actual values.
  • Bad: High error values, predictions far from true values.

Common pitfalls when evaluating Dense layer models

  • Accuracy paradox: High accuracy can be misleading if classes are imbalanced.
  • Data leakage: Using test data during training inflates metrics falsely.
  • Overfitting: Very high training accuracy but low test accuracy means model memorizes data, not generalizes.
  • Ignoring class imbalance: Metrics like accuracy may hide poor performance on minority classes.
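The accuracy paradox is easy to demonstrate on an imbalanced toy dataset (made up for illustration): a "model" that always predicts the majority class looks excellent by accuracy and useless by recall.

```python
# Imbalanced toy dataset: 98 negatives, 2 positives.
labels = [0] * 98 + [1] * 2
preds = [0] * 100  # a degenerate model that always predicts the majority class

accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
tp = sum(p == 1 and t == 1 for p, t in zip(preds, labels))
fn = sum(p == 0 and t == 1 for p, t in zip(preds, labels))
recall = tp / (tp + fn)

print(f"Accuracy: {accuracy:.0%}")  # 98% -- looks great
print(f"Recall:   {recall:.0%}")    # 0%  -- catches no positives at all
```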

Self-check question

Your Dense layer model has 98% accuracy but only 12% recall on fraud detection. Is it good for production? Why or why not?

Answer: No, it is not ready for production. With 12% recall, the model misses 88% of fraud cases, which is dangerous. The 98% accuracy is misleading because fraud is rare: a model that flags nothing would score almost as high. You need to improve recall to catch more fraud.

Key Result
For Dense layers in classification, balance Precision and Recall to ensure reliable predictions; accuracy alone can be misleading.