
Backpropagation concept in ML Python

Introduction
Backpropagation helps a computer learn by fixing its mistakes step-by-step, making its guesses better over time.
Training a neural network to recognize images, such as telling cats from dogs.
Teaching a model to predict house prices from features like size and location.
Improving speech recognition systems to understand spoken words better.
Building a recommendation system that learns user preferences.
Optimizing a model that predicts weather patterns from past data.
Syntax
ML Python
1. Forward pass: Calculate output from input through the network.
2. Calculate error: Find difference between predicted and actual output.
3. Backward pass: Compute gradients of error with respect to weights.
4. Update weights: Adjust weights to reduce error using gradients.
Backpropagation uses the chain rule from calculus to find how each weight affects the error.
It works layer by layer, starting from the output going back to the input.
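As a minimal sketch, the four steps above can be traced for a single weight with an identity activation and squared error (all values here are illustrative, not part of the syntax):

```python
# One weight, identity activation, squared error (illustrative values)
x, y_true = 2.0, 1.0    # input and target
w, lr = 0.3, 0.1        # initial weight and learning rate

# 1. Forward pass
y_pred = w * x                      # 0.6
# 2. Calculate error
error = y_pred - y_true             # -0.4
# 3. Backward pass: d(error**2)/dw = 2 * error * x  (chain rule)
grad = 2 * error * x                # -1.6
# 4. Update weights
w = w - lr * grad                   # 0.3 - 0.1 * (-1.6) = 0.46
```

One pass moves the weight from 0.3 to 0.46; repeating the loop keeps shrinking the error.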
Examples
This shows the basic math steps in backpropagation for one neuron.
ML Python
Forward pass: output = activation(weights * input + bias)
Error = predicted_output - actual_output
Backward pass: gradient = error * derivative_of_activation * input
Update weights: weights = weights - learning_rate * gradient
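These four lines can be checked numerically for one sigmoid neuron; the input, weight, bias, and learning rate below are arbitrary choices for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x, w, b = 1.0, 0.5, 0.0      # illustrative input, weight, bias
target, lr = 1.0, 0.1

# Forward pass
out = sigmoid(w * x + b)
# Error = predicted - actual
error = out - target
# Backward pass: chain rule through the sigmoid, then through the weight
grad_w = error * out * (1 - out) * x
# Update: step against the gradient
w = w - lr * grad_w
```

Because the prediction is below the target, the error and gradient are negative, so the update increases the weight, pushing the next prediction toward the target.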
This explains how backpropagation works through multiple layers in a neural network.
ML Python
For each layer:
  Calculate error gradient
  Propagate error to previous layer
  Update weights using gradients and learning rate
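The loop above can be sketched for a two-layer network, assuming sigmoid activations, mean-squared error, and XOR as toy data (the layer sizes, seed, and hyperparameters are illustrative choices, not requirements):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# XOR toy data: 2 inputs -> 2 hidden units -> 1 output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, (2, 2)); b1 = np.zeros((1, 2))
W2 = rng.uniform(-1, 1, (2, 1)); b2 = np.zeros((1, 1))
lr = 0.5

initial_loss = np.mean((y - sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)) ** 2)

for _ in range(5000):
    # Forward pass, layer by layer
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Output-layer error gradient
    d_out = (y - out) * out * (1 - out)
    # Propagate error back to the hidden layer
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)

    # Update weights using gradients and the learning rate
    W2 += lr * hidden.T @ d_out
    b2 += lr * d_out.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ d_hidden
    b1 += lr * d_hidden.sum(axis=0, keepdims=True)

final_loss = np.mean((y - sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)) ** 2)
```

The key line is the hidden-layer gradient: the output error is carried backward through `W2` and scaled by the hidden layer's own sigmoid derivative, which is exactly the "propagate error to previous layer" step.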
Sample Model
This code trains a simple neural network with one neuron to learn a basic pattern using backpropagation. It prints the learned weights, bias, and predictions after training.
ML Python
import numpy as np

# Simple neural network with one input, one neuron, one output

# Activation function: sigmoid and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Expects x to already be a sigmoid output: d/dz sigmoid(z) = s * (1 - s)
    return x * (1 - x)

# Input data (4 samples, 1 feature)
inputs = np.array([[0], [1], [2], [3]])
# Actual outputs
actual_output = np.array([[0], [0], [1], [1]])

# Initialize weights randomly (seeded so runs are reproducible)
np.random.seed(42)
weights = np.random.uniform(size=(1, 1))
bias = np.random.uniform()

learning_rate = 0.1

for epoch in range(1000):
    # Forward pass
    linear_output = np.dot(inputs, weights) + bias
    predicted_output = sigmoid(linear_output)

    # Calculate error
    error = actual_output - predicted_output

    # Backward pass
    d_predicted_output = error * sigmoid_derivative(predicted_output)

    # Update weights and bias
    weights += learning_rate * np.dot(inputs.T, d_predicted_output)
    bias += learning_rate * np.sum(d_predicted_output)

# Final predictions after training
final_output = sigmoid(np.dot(inputs, weights) + bias)

print("Weights after training:", weights.flatten())
print("Bias after training:", bias)
print("Predictions after training:", final_output.flatten())
Important Notes
Backpropagation requires a differentiable activation function like sigmoid or ReLU.
The learning rate controls the size of each weight update; too large can cause instability, too small slows learning.
Training for more epochs usually improves accuracy, but watch out for overfitting.
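The learning-rate note can be seen in a toy gradient descent on f(w) = w**2, whose gradient is 2w (the function and step counts here are illustrative):

```python
def descend(lr, steps=20, w=1.0):
    # Repeatedly apply the update rule w <- w - lr * f'(w) with f(w) = w**2
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

small = descend(lr=0.1)   # shrinks steadily toward the minimum at 0
large = descend(lr=1.1)   # each step overshoots; |w| grows instead of shrinking
```

With lr = 0.1 each step multiplies w by 0.8, so it converges; with lr = 1.1 each step multiplies w by -1.2, so the iterate oscillates and diverges.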
Summary
Backpropagation fixes errors by moving backward through the network to update weights.
It uses simple math steps repeated many times to help the model learn.
This process is key to training neural networks for many real-world tasks.