Prefetching for performance in TensorFlow
Prefetching helps your model get data faster by preparing the next batch while the model is still training on the current one, making training smoother and quicker.
dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
buffer_size controls how many batches to prepare in advance.
Using tf.data.AUTOTUNE lets TensorFlow decide the best buffer size automatically.
dataset = dataset.prefetch(1)                 # fixed buffer of one batch
dataset = dataset.prefetch(tf.data.AUTOTUNE)  # let TensorFlow tune the buffer
This code creates a dataset of numbers, squares them, batches them in groups of 2, and uses prefetching to prepare the next batch while the current one is processed. It then prints each batch.
import tensorflow as tf

# Create a simple dataset of numbers 0 to 9
raw_dataset = tf.data.Dataset.range(10)

# Map a function to square each number
mapped_dataset = raw_dataset.map(lambda x: x * x)

# Batch the data
batched_dataset = mapped_dataset.batch(2)

# Add prefetching to improve performance
prefetched_dataset = batched_dataset.prefetch(tf.data.AUTOTUNE)

# Iterate and print batches
for batch in prefetched_dataset:
    print(batch.numpy())
Prefetching works best when your data loading or preprocessing is slower than model training.
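You can see this effect directly by making both sides artificially slow. The sketch below (the 10 ms sleeps and element count are illustrative choices, not part of the original example) simulates slow preprocessing with tf.py_function and slow training with a sleep in the consuming loop; with prefetching, the two overlap instead of running back to back:

```python
import time
import tensorflow as tf

def slow_square(x):
    """Square x, but take ~10 ms to do it (simulates slow preprocessing)."""
    def _fn(v):
        time.sleep(0.01)
        return v * v
    return tf.py_function(_fn, [x], tf.int64)

def consume(dataset):
    """Iterate the dataset, simulating ~10 ms of training per element."""
    start = time.perf_counter()
    for _ in dataset:
        time.sleep(0.01)  # pretend the model trains on this batch
    return time.perf_counter() - start

base = tf.data.Dataset.range(20).map(slow_square)

t_plain = consume(base)                                # load, then train, in sequence
t_prefetch = consume(base.prefetch(tf.data.AUTOTUNE))  # loading overlaps training

print(f"without prefetch: {t_plain:.2f}s, with prefetch: {t_prefetch:.2f}s")
```

On a typical run the prefetched version finishes in roughly half the time, because the background thread prepares the next element while the loop is "training" on the current one.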
Using tf.data.AUTOTUNE is recommended for most cases to let TensorFlow optimize performance.
Prefetching does not change your data; it only speeds up how fast data is fed to the model.
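A quick way to convince yourself of this is to materialize the same pipeline with and without prefetching and compare the results element by element (this comparison snippet is a sketch added here for illustration):

```python
import tensorflow as tf

# Same pipeline as the example above: squares of 0..9, batched in pairs
base = tf.data.Dataset.range(10).map(lambda x: x * x).batch(2)

plain = [batch.numpy().tolist() for batch in base]
prefetched = [batch.numpy().tolist() for batch in base.prefetch(tf.data.AUTOTUNE)]

# Prefetching changes timing, not values or order
print(plain == prefetched)  # True
```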
Prefetching prepares data batches ahead of time to reduce waiting during training.
Use dataset.prefetch(tf.data.AUTOTUNE) for automatic buffer size tuning.
It helps keep your GPU or TPU busy and speeds up training.