Compiling models (optimizer, loss, metrics) in TensorFlow

Compiling a model configures how it learns and how its progress is measured: the optimizer drives weight updates, the loss defines what training minimizes, and metrics report performance.
model.compile(optimizer='optimizer_name_or_object', loss='loss_function_name_or_object', metrics=['metric1', 'metric2', ...])
optimizer controls how the model learns (e.g., 'adam', 'sgd').
loss is the function that measures how wrong the model is (e.g., 'sparse_categorical_crossentropy').
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse', metrics=['mse'])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy', 'Precision'])
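Once a model is compiled, the loss and every listed metric are computed together whenever you train or evaluate. A minimal sketch (dummy data and an illustrative input shape, just to show the mechanics):

```python
import numpy as np
import tensorflow as tf

# A tiny binary classifier (hypothetical 4-feature input)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))
])
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Dummy data: 8 samples, 4 features, 0/1 labels
x = np.random.random((8, 4))
y = np.random.randint(0, 2, 8)

# evaluate() reports the compiled loss plus each metric;
# return_dict=True keys the results by name
results = model.evaluate(x, y, verbose=0, return_dict=True)
print(results)
```

The metric values are untrained here, so only the shape of the output matters: one entry for the loss and one per compiled metric.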
This code builds a small neural network, compiles it with the Adam optimizer and sparse categorical crossentropy loss, then trains it on random data for 3 epochs and prints the accuracy recorded for each epoch.
import tensorflow as tf
import numpy as np

# Create a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(3, activation='softmax')
])

# Compile the model with optimizer, loss, and metrics
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Create some dummy data
x_train = np.random.random((20, 5))
y_train = np.random.randint(0, 3, 20)

# Train the model
history = model.fit(x_train, y_train, epochs=3, verbose=0)

# Print training accuracy for each epoch
for i, acc in enumerate(history.history['accuracy'], 1):
    print(f'Epoch {i} accuracy: {acc:.4f}')
You must compile the model before training it.
Choosing the right optimizer and loss depends on your problem type (classification, regression, etc.).
Metrics help you understand how well the model is doing during training.
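As a sketch of how that choice plays out, here are typical compile settings for three common problem types (the models and input shapes below are made up purely for illustration):

```python
import tensorflow as tf

# Regression (continuous target): mean squared error loss
reg_model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(4,))
])
reg_model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Binary classification (0/1 target): binary crossentropy
bin_model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))
])
bin_model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])

# Multi-class classification (integer labels): sparse categorical crossentropy
multi_model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation='softmax', input_shape=(4,))
])
multi_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])
```

The pattern to remember: the loss must match the shape and type of your targets, while the metrics are whatever numbers you want to watch.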
Compiling sets how the model learns and measures success.
Optimizer controls learning steps, loss measures error, metrics track performance.
Always compile before training your model.
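Keras enforces that last rule: calling fit() on an uncompiled model raises an error (a RuntimeError or ValueError depending on the Keras version). A quick sketch with made-up data:

```python
import numpy as np
import tensorflow as tf

# Build a model but deliberately skip model.compile(...)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(2,))
])

x = np.zeros((4, 2))
y = np.zeros((4, 1))

# fit() fails because the model has no optimizer or loss yet
try:
    model.fit(x, y, verbose=0)
    compiled_first = True
except (RuntimeError, ValueError) as e:
    compiled_first = False
    print('fit() failed:', e)
```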