TensorFlow · ~5 mins

Compiling models (optimizer, loss, metrics) in TensorFlow

Introduction

Compiling a model sets up how it learns and measures success. It tells the model how to improve and how to check its progress.

When to Use

When you finish building a neural network and want to train it.
When you want to choose how the model updates itself during learning.
When you want to decide how to measure the model's accuracy or error.
When you want to prepare the model for training with specific goals.
When you want to compare different training methods by changing settings.
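As a sketch of the last point, you can compile the same architecture with different optimizers and compare their training histories (the layer sizes and random data below are illustrative, not from the original example):

```python
import numpy as np
import tensorflow as tf

# Illustrative random data: 32 samples, 5 features, 3 classes
x = np.random.random((32, 5))
y = np.random.randint(0, 3, 32)

def build_model():
    # Same architecture each time, so only the compile settings differ
    return tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(3, activation='softmax')
    ])

histories = {}
for name in ['adam', 'sgd']:
    model = build_model()
    model.compile(optimizer=name,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    histories[name] = model.fit(x, y, epochs=2, verbose=0)

for name, h in histories.items():
    print(name, h.history['loss'])
```

Because only the `optimizer` argument changes, any difference in the loss curves comes from the training method itself.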
Syntax
TensorFlow
model.compile(optimizer='optimizer_name_or_object', loss='loss_function_name_or_object', metrics=['metric1', 'metric2', ...])

optimizer controls how the model learns (e.g., 'adam', 'sgd').

loss is the function that measures how wrong the model is (e.g., 'sparse_categorical_crossentropy').

metrics is a list of values to track during training and evaluation (e.g., 'accuracy').
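The string names are shortcuts for the corresponding Keras objects; passing the objects instead lets you set parameters such as the learning rate. Both calls below configure the same training setup, the second with an explicit learning rate (the one-layer model is just a placeholder):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Shortcut form: string names use each object's defaults
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Equivalent object form, with an explicit learning rate
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.MeanAbsoluteError()]
)
```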

Examples
Use Adam optimizer, sparse categorical crossentropy loss, and track accuracy.
TensorFlow
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Use SGD optimizer with a learning rate of 0.01, mean squared error loss, and track mse metric.
TensorFlow
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse', metrics=['mse'])
Use RMSprop optimizer, binary crossentropy loss, and track accuracy and precision.
TensorFlow
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy', tf.keras.metrics.Precision()])
Sample Model

This code builds a small neural network, compiles it with Adam optimizer and sparse categorical crossentropy loss, then trains it on random data for 3 epochs. It prints the accuracy after each epoch.

TensorFlow
import numpy as np
import tensorflow as tf

# Create a simple model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])

# Compile the model with optimizer, loss, and metrics
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Create some dummy data: 20 samples, 5 features, 3 classes
x_train = np.random.random((20, 5))
y_train = np.random.randint(0, 3, 20)

# Train the model
history = model.fit(x_train, y_train, epochs=3, verbose=0)

# Print training accuracy for each epoch
for i, acc in enumerate(history.history['accuracy'], 1):
    print(f'Epoch {i} accuracy: {acc:.4f}')
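The metrics chosen in compile also determine what model.evaluate reports after training: it returns the loss followed by each configured metric, in order. A minimal sketch, reusing random data of the same shape as above:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

x = np.random.random((20, 5))
y = np.random.randint(0, 3, 20)
model.fit(x, y, epochs=1, verbose=0)

# evaluate returns [loss, metric1, metric2, ...] as set in compile
loss, accuracy = model.evaluate(x, y, verbose=0)
print(f'loss={loss:.4f} accuracy={accuracy:.4f}')
```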
Important Notes

You must compile the model before training it.

Choosing the right optimizer and loss depends on your problem type (classification, regression, etc.).

Metrics help you understand how well the model is doing during training.
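As a rough guide to matching loss to problem type, these are typical pairings (not hard rules; the single-layer models are placeholders):

```python
import tensorflow as tf

# Regression: numeric target -> mean squared error
regressor = tf.keras.Sequential([tf.keras.layers.Dense(1)])
regressor.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Binary classification: one sigmoid output -> binary crossentropy
binary = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
binary.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Multi-class with integer labels -> sparse categorical crossentropy
multi = tf.keras.Sequential([tf.keras.layers.Dense(3, activation='softmax')])
multi.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```

If your class labels were one-hot encoded instead of integers, you would use 'categorical_crossentropy' in the last case.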

Summary

Compiling sets how the model learns and measures success.

Optimizer controls learning steps, loss measures error, metrics track performance.

Always compile before training your model.