TensorFlow · ~12 mins

Activation functions (ReLU, sigmoid, softmax) in TensorFlow - Model Pipeline Trace

Model Pipeline - Activation functions (ReLU, sigmoid, softmax)

This pipeline shows how data moves through a simple neural network using three common activation functions: ReLU, sigmoid, and softmax. These functions help the model learn by adding non-linear behavior and producing probabilities for classification.

Data Flow - 4 Stages
1. Input Layer
Raw input features representing 4 numeric values (1 row x 4 columns).
[2.0, -1.0, 0.5, 3.0]
2. Hidden Layer with ReLU
Linear transformation (weights and bias) followed by ReLU activation, max(0, x); 1 row x 4 columns -> 1 row x 3 columns.
Input [2.0, -1.0, 0.5, 3.0] -> Linear output [1.5, -0.5, 2.0] -> ReLU output [1.5, 0.0, 2.0]
3. Hidden Layer with Sigmoid
Linear transformation followed by sigmoid activation (each output squashed between 0 and 1); 1 row x 3 columns -> 1 row x 3 columns.
Input [1.5, 0.0, 2.0] -> Linear output [0.8, -1.2, 0.5] -> Sigmoid output [0.69, 0.23, 0.62]
4. Output Layer with Softmax
Linear transformation followed by softmax activation (outputs sum to 1, representing class probabilities); 1 row x 3 columns -> 1 row x 3 columns.
Input [0.69, 0.23, 0.62] -> Linear output [2.0, 1.0, 0.1] -> Softmax output [0.66, 0.24, 0.10]
Training Trace - Epoch by Epoch

Loss
1.2 |*       
1.0 | *      
0.8 |  *     
0.6 |   *    
0.4 |    *   
    +---------
     1 2 3 4 5 Epochs
Epoch | Loss ↓ | Accuracy ↑ | Observation
------|--------|------------|------------
1     | 1.2    | 0.45       | Loss starts high, accuracy low as the model begins learning
2     | 0.9    | 0.60       | Loss decreases, accuracy improves as activations help the model learn
3     | 0.7    | 0.72       | Model continues to improve with clearer decision boundaries
4     | 0.5    | 0.80       | Loss decreases steadily; accuracy rises, showing good learning
5     | 0.4    | 0.85       | Model converges with lower loss and higher accuracy
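The table's numbers are illustrative, but the epoch-by-epoch pattern is easy to reproduce. A minimal sketch, assuming a hypothetical one-feature sigmoid classifier (not the traced model) trained by gradient descent on binary cross-entropy, shows the same downward loss trend over 5 epochs:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy dataset: one feature, binary labels
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]

w, b, lr = 0.0, 0.0, 0.5
losses = []
for epoch in range(1, 6):
    preds = [sigmoid(w * x + b) for x in xs]
    # Binary cross-entropy averaged over the dataset
    loss = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(ys, preds)) / len(xs)
    losses.append(loss)
    # Gradient descent step on weight and bias
    dw = sum((p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    db = sum(p - y for p, y in zip(preds, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db
    print(f"epoch {epoch}: loss {loss:.3f}")
```

The loss falls every epoch, mirroring the trend in the table (though the concrete values differ from the trace).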
Prediction Trace - 4 Layers
Layer 1: Input Layer
Layer 2: Hidden Layer with ReLU
Layer 3: Hidden Layer with Sigmoid
Layer 4: Output Layer with Softmax
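The four layers above map directly onto a Keras `Sequential` model. This is a sketch with hypothetical layer sizes taken from the trace (4 inputs, 3 units per layer); the weights are randomly initialised, so the concrete numbers will differ from the traced values:

```python
import numpy as np
import tensorflow as tf

# Layer 1 is the input; layers 2-4 apply the three activations in order.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="relu"),     # Layer 2: ReLU
    tf.keras.layers.Dense(3, activation="sigmoid"),  # Layer 3: sigmoid
    tf.keras.layers.Dense(3, activation="softmax"),  # Layer 4: softmax
])

x = np.array([[2.0, -1.0, 0.5, 3.0]])  # the traced input row
probs = model.predict(x, verbose=0)
print(probs.shape)  # (1, 3): one row of three class probabilities
print(probs.sum())  # softmax outputs sum to ~1
```

Whatever the random weights, the final softmax layer guarantees a valid probability distribution over the three classes.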
Model Quiz - 3 Questions
Test your understanding
What does the ReLU activation function do to negative input values?
A. Converts them to probabilities
B. Sets them to zero
C. Leaves them unchanged
D. Maps them between 0 and 1
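You can check ReLU's behaviour directly with a one-liner (a plain-Python stand-in for `tf.nn.relu`):

```python
# ReLU on a mix of signs: negatives are zeroed, non-negatives pass through
values = [2.0, -1.0, 0.5, 3.0, -7.5]
print([max(0.0, v) for v in values])  # -> [2.0, 0.0, 0.5, 3.0, 0.0]
```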
Key Insight
Activation functions like ReLU, sigmoid, and softmax add important non-linear transformations that help neural networks learn complex patterns and produce meaningful outputs such as probabilities for classification.