
Batch size and epochs in TensorFlow

Introduction

Batch size and epochs control how a machine learning model learns from its data step by step. Tuning them helps make training faster and more effective.

When training a model on a large dataset that cannot fit into memory all at once.
When you want to control how many times the model sees the entire dataset.
When you want to balance training speed and model accuracy.
When experimenting to find the best training settings for your model.
When you want to avoid overfitting or underfitting during training.
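
The first point above, a dataset too large for memory, is typically handled by streaming batches with tf.data; here is a minimal sketch using small synthetic arrays as stand-ins for real data:

```python
import numpy as np
import tensorflow as tf

# For truly large datasets you would stream from disk;
# these small in-memory arrays are illustrative stand-ins.
features = np.random.rand(100, 4).astype('float32')
labels = np.random.randint(0, 2, size=100)

# Build a pipeline that shuffles the data and serves it in batches of 32
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=100).batch(32)

# Each element of the dataset is now one batch of up to 32 samples
for batch_x, batch_y in dataset.take(1):
    print(batch_x.shape)  # (32, 4)
```

A pipeline like this can be passed directly to model.fit in place of the raw arrays, in which case the batch_size argument is omitted because batching already happened in the pipeline.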
Syntax
TensorFlow
model.fit(x_train, y_train, batch_size=32, epochs=10)

batch_size is the number of training samples the model processes before updating its weights.

epochs is the number of complete passes the model makes over the entire training dataset.
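
Together, these two settings determine how many weight updates happen in total; a quick back-of-the-envelope calculation, using MNIST's 60,000 training samples as the example size:

```python
import math

# Illustrative values: MNIST-sized training set, common settings
num_samples = 60000
batch_size = 32
epochs = 10

# Each epoch processes the dataset once, in ceil(num_samples / batch_size) steps
steps_per_epoch = math.ceil(num_samples / batch_size)
total_updates = steps_per_epoch * epochs

print(steps_per_epoch)  # 1875
print(total_updates)    # 18750
```

Halving the batch size doubles the number of updates per epoch, which is one reason smaller batches make each epoch slower.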

Examples
The model trains on 64 samples at a time and repeats the whole dataset 5 times.
TensorFlow
model.fit(x_train, y_train, batch_size=64, epochs=5)
The model trains on larger chunks of data (128 samples at a time) and trains for longer (20 passes over the dataset).
TensorFlow
model.fit(x_train, y_train, batch_size=128, epochs=20)
If batch_size is not set, TensorFlow uses a default batch size of 32.
TensorFlow
model.fit(x_train, y_train, epochs=10)
Sample Model

This program trains a simple neural network on handwritten digits. It uses batch size 64 and runs through the data 3 times (epochs). After training, it shows the accuracy on test data.

TensorFlow
import tensorflow as tf
from tensorflow.keras import layers, models

# Load simple dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize data
x_train = x_train / 255.0
x_test = x_test / 255.0

# Build a simple model
model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train model with batch_size=64 and epochs=3
history = model.fit(x_train, y_train, batch_size=64, epochs=3, verbose=2)

# Evaluate model
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"Test accuracy: {accuracy:.4f}")
Important Notes

Smaller batch sizes use less memory but require more weight updates per epoch, so training takes longer.

More epochs give the model more chances to learn, but too many can cause overfitting.
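
One common safeguard against training for too many epochs is Keras's built-in EarlyStopping callback; here is a minimal self-contained sketch, where the synthetic dataset and tiny model are illustrative stand-ins:

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset (illustrative only)
x_train = np.random.rand(200, 4).astype('float32')
y_train = (x_train.sum(axis=1) > 2).astype('int32')

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Stop when validation loss stops improving for 2 epochs in a row,
# and keep the weights from the best epoch seen so far
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=2, restore_best_weights=True
)

history = model.fit(
    x_train, y_train,
    batch_size=32,
    epochs=50,               # upper bound; early stopping may end sooner
    validation_split=0.2,    # hold out 20% of training data for validation
    callbacks=[early_stop],
    verbose=0
)
print(len(history.history['loss']))  # epochs actually run (at most 50)
```

With this in place, a generous epochs value is safe: training simply halts once validation loss stops improving.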

Try different batch sizes and epochs to find the best balance for your data.
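
Trying different settings can be as simple as a small sweep over candidate batch sizes, comparing validation accuracy; a sketch on synthetic data, where the data, model, and candidate values are all illustrative:

```python
import numpy as np
import tensorflow as tf

# Synthetic dataset (illustrative only)
x = np.random.rand(300, 4).astype('float32')
y = (x.sum(axis=1) > 2).astype('int32')

def build_model():
    # Build a fresh model for each trial so runs don't share learned weights
    m = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),
        tf.keras.layers.Dense(8, activation='relu'),
        tf.keras.layers.Dense(2, activation='softmax')
    ])
    m.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    return m

# Train once per candidate batch size and record final validation accuracy
results = {}
for batch_size in [16, 32, 64]:
    model = build_model()
    history = model.fit(x, y, batch_size=batch_size, epochs=5,
                        validation_split=0.2, verbose=0)
    results[batch_size] = history.history['val_accuracy'][-1]

best = max(results, key=results.get)
print(f"Best batch size tried: {best}")
```

For real projects, the same loop extends naturally to epochs or other hyperparameters, and dedicated tools such as KerasTuner automate the search.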

Summary

Batch size controls how many samples the model sees before updating.

Epochs control how many times the model sees the whole dataset.

Choosing the right batch size and epochs helps your model learn well and train efficiently.