
Batch size and epochs in TensorFlow - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is batch size in machine learning?
Batch size is the number of training examples the model looks at before updating its internal settings (weights). It controls how many samples are processed together in one go.
beginner
Define epoch in the context of training a model.
An epoch is one full pass through the entire training dataset. After one epoch, the model has seen every training example once.
intermediate
How does increasing batch size affect training speed and memory?
Increasing batch size usually speeds up each epoch because more samples are processed at once, but it also uses more memory. Very large batches also mean fewer, noisier-free weight updates per epoch, which can hurt the model's ability to generalize.
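The trade-off above comes down to simple arithmetic: the model performs one weight update per batch, so batch size determines how many updates happen in each epoch. A quick plain-Python sketch (the function name is illustrative, not a TensorFlow API):

```python
import math

def updates_per_training(num_samples, batch_size, epochs):
    """One weight update per batch, per epoch.
    The last batch of an epoch may be smaller than batch_size."""
    steps_per_epoch = math.ceil(num_samples / batch_size)
    return steps_per_epoch * epochs

# 60,000 samples (e.g. MNIST), batch size 32, 10 epochs:
print(updates_per_training(60_000, 32, 10))  # -> 18750 updates
```

Doubling the batch size roughly halves the number of updates per epoch, which is why very large batches can leave the model under-trained unless you compensate elsewhere.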
intermediate
What happens if you increase the number of epochs too much?
If you train for too many epochs, the model might memorize the training data and perform poorly on new data. This is called overfitting.
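A common guard against training for too many epochs is early stopping: track validation loss and stop once it stops improving. Here is a minimal sketch of that check in plain Python (`should_stop` is a hypothetical helper; Keras ships this idea as the `EarlyStopping` callback):

```python
def should_stop(val_losses, patience=3):
    """Stop if validation loss has not improved for `patience`
    consecutive epochs compared with the best loss seen before them."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before

# Loss plateaus after epoch 3, so training should stop:
print(should_stop([0.9, 0.8, 0.7, 0.71, 0.72, 0.73]))  # -> True
```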
beginner
In TensorFlow, how do you specify batch size and epochs when training a model?
You specify batch size and epochs in the model's fit() method like this: model.fit(x_train, y_train, batch_size=32, epochs=10). This trains the model with batches of 32 samples for 10 full passes over the data.
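To see what those two arguments control, here is a plain-Python sketch of the loop that fit() effectively runs over your data (illustrative only, not TensorFlow's actual internals):

```python
def fit_loop(x_train, batch_size, epochs):
    """Mimic the iteration order of model.fit(): for each epoch,
    walk the dataset in chunks of batch_size (last chunk may be short)."""
    batch_sizes_seen = []
    for epoch in range(epochs):                # one epoch = one full pass
        for start in range(0, len(x_train), batch_size):
            batch = x_train[start:start + batch_size]  # one weight update here
            batch_sizes_seen.append(len(batch))
    return batch_sizes_seen

# 10 samples, batch_size=4, epochs=2 -> batches of 4, 4, 2, twice:
print(fit_loop(list(range(10)), batch_size=4, epochs=2))  # -> [4, 4, 2, 4, 4, 2]
```

With `batch_size=32, epochs=10` on 60,000 samples, the inner loop runs 1,875 times per epoch and the outer loop 10 times.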
What does one epoch represent in training?
A. One batch of data processed
B. One update of model weights
C. One pass through the entire training dataset
D. One prediction made by the model
If batch size is 64, what does this mean?
A. The model trains on 64 epochs
B. The model updates weights after every 64 samples
C. The model sees 64 batches per epoch
D. The model uses 64 layers
What is a risk of training with too many epochs?
A. Overfitting the data
B. Less memory usage
C. Faster training
D. Underfitting the data
Which of these is true about increasing batch size?
A. It decreases the number of epochs needed
B. It reduces memory usage
C. It always improves model accuracy
D. It can speed up training but uses more memory
How do you set batch size and epochs in TensorFlow's model.fit()?
A. model.fit(x_train, y_train, batch_size=32, epochs=10)
B. model.train(batch_size=32, epochs=10)
C. model.fit(batch=32, epoch=10)
D. model.train(x_train, y_train, batch=32, epoch=10)
Explain in your own words what batch size and epochs mean in training a machine learning model.
Think about how many samples the model sees at once and how many times it sees the whole dataset.
Describe the effects of choosing very large batch sizes and very high numbers of epochs on model training.
Consider both computer resources and model performance.