
Simple neural network with scikit-learn in ML Python - Model Pipeline Trace

Model Pipeline - Simple neural network with scikit-learn

This pipeline shows how a simple neural network learns to classify data using scikit-learn. It starts with raw data, prepares it, trains the model, and then makes predictions on held-out samples.

Data Flow - 5 Stages
1. Raw Data
   Input: 150 rows x 4 columns. Load the Iris dataset with 4 features per sample.
   Output: 150 rows x 4 columns
   Sample: [5.1, 3.5, 1.4, 0.2]
2. Train/Test Split
   Input: 150 rows x 4 columns. Split data into 120 training and 30 testing samples.
   Output: 120 rows x 4 columns (train), 30 rows x 4 columns (test)
   Sample: [5.0, 3.6, 1.4, 0.2] (train), [6.7, 3.1, 4.7, 1.5] (test)
3. Feature Scaling
   Input: 120 rows x 4 columns. Scale features to zero mean and unit variance.
   Output: 120 rows x 4 columns
   Sample: [-0.9, 1.2, -1.3, -1.4]
4. Model Training
   Input: 120 rows x 4 columns. Train MLPClassifier with 1 hidden layer of 5 neurons.
   Output: trained model (weights updated after each epoch)
5. Prediction
   Input: 30 rows x 4 columns. Model predicts class labels for test samples.
   Output: 30 rows x 1 column (predicted labels)
   Sample: [0, 1, 2, 1, 0]
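The five stages above can be sketched end to end in code. This is a minimal sketch assuming scikit-learn is installed; the random seed and split parameters are illustrative choices, not part of the original trace.

```python
# Sketch of the 5-stage pipeline: load -> split -> scale -> train -> predict.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Stage 1: Raw data - 150 samples, 4 features each
X, y = load_iris(return_X_y=True)

# Stage 2: Split into 120 training and 30 testing samples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=30, random_state=42, stratify=y)

# Stage 3: Scale to zero mean, unit variance (fit the scaler on train only)
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# Stage 4: One hidden layer with 5 neurons
clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=42)
clf.fit(X_train_s, y_train)

# Stage 5: Predict class labels for the 30 test samples
y_pred = clf.predict(X_test_s)
print(X.shape, X_train.shape, X_test.shape, y_pred.shape)
```

Fitting the scaler on the training set only (and merely transforming the test set) avoids leaking test-set statistics into training.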
Training Trace - Epoch by Epoch
Loss
1.2 |****
0.9 |*** 
0.6 |**  
0.3 |*   
0.0 |    
     1 5 10 15 20 Epochs
Epoch | Loss ↓ | Accuracy ↑ | Observation
1     | 1.2    | 0.50       | Model starts learning: loss is high, accuracy low
5     | 0.7    | 0.75       | Loss decreases, accuracy improves
10    | 0.4    | 0.85       | Model is learning well, loss continues to drop
15    | 0.25   | 0.90       | Loss low, accuracy high, model converging
20    | 0.15   | 0.95       | Training loss minimal, accuracy near perfect
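An epoch-by-epoch trace like the table above can be produced with scikit-learn's `warm_start` option, which keeps the learned weights between `fit()` calls. A sketch (the exact loss and accuracy numbers will differ from the illustrative values in the table):

```python
# Sketch: trace training loss and test accuracy one epoch at a time.
import warnings
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=30, random_state=42, stratify=y)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# warm_start=True preserves weights between fit() calls, so each call
# with max_iter=1 advances training by one epoch.
clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=1,
                    warm_start=True, random_state=42)

trace = []
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # each 1-epoch fit raises ConvergenceWarning
    for epoch in range(1, 21):
        clf.fit(X_train_s, y_train)
        trace.append((epoch, clf.loss_, clf.score(X_test_s, y_test)))

for epoch, loss, acc in trace:
    if epoch in (1, 5, 10, 15, 20):
        print(f"epoch {epoch:2d}  loss {loss:.3f}  accuracy {acc:.2f}")
```

After a normal single `fit()` call, the same information is available in one shot via the model's `loss_curve_` attribute.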
Prediction Trace - 5 Layers
Layer 1: Input Layer
Layer 2: Feature Scaling
Layer 3: Hidden Layer (ReLU activation)
Layer 4: Output Layer (Softmax)
Layer 5: Prediction
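The layer-by-layer prediction trace can be reproduced by hand from the trained weights. The sketch below assumes the pipeline from earlier; `coefs_` and `intercepts_` are the fitted MLPClassifier weight matrices and bias vectors, and for multiclass problems the output activation is softmax.

```python
# Sketch: manual forward pass mirroring layers 1-5 of the trace above.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=30, random_state=42, stratify=y)
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                    random_state=42).fit(scaler.transform(X_train), y_train)

x = scaler.transform(X_test)                                # layers 1-2: input + scaling
h = np.maximum(0, x @ clf.coefs_[0] + clf.intercepts_[0])   # layer 3: hidden, ReLU
z = h @ clf.coefs_[1] + clf.intercepts_[1]                  # layer 4: output logits
e = np.exp(z - z.max(axis=1, keepdims=True))
probs = e / e.sum(axis=1, keepdims=True)                    # layer 4: softmax
labels = probs.argmax(axis=1)                               # layer 5: predicted class

print("manual predictions match clf.predict:", (labels == clf.predict(x)).all())
```

Subtracting the row-wise maximum before exponentiating is the standard numerically stable way to compute softmax; it does not change the resulting probabilities.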
Model Quiz - 3 Questions
Test your understanding
What happens to the data shape after feature scaling?
A. Number of columns reduces by half
B. Number of rows doubles
C. Shape stays the same but values change
D. Data becomes one-dimensional
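The first quiz question can be checked directly in code. A quick sketch (assuming scikit-learn) showing that `StandardScaler` changes the values but leaves the shape untouched:

```python
# Sketch: feature scaling preserves the data's shape; only values change.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

print(X.shape, X_scaled.shape)         # both (150, 4)
print(X_scaled.mean(axis=0).round(2))  # approximately 0 per column
print(X_scaled.std(axis=0).round(2))   # approximately 1 per column
```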
Key Insight
This visualization shows how a simple neural network learns by adjusting weights to reduce loss and improve accuracy. Feature scaling helps the model learn faster. The softmax layer turns outputs into probabilities for easy class prediction.