Activation functions such as ReLU and sigmoid introduce the nonlinearity that lets a model learn complex patterns, deciding how much signal passes through each neuron; softmax, typically applied at the output layer, turns raw scores into a probability distribution over classes.
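As a concrete illustration, here is a minimal sketch of these three functions, assuming NumPy is available (the function names and example input are ours, not from any particular library):

```python
import numpy as np

def relu(z):
    # Zero out negative inputs; pass positives through unchanged.
    return np.maximum(0.0, z)

def sigmoid(z):
    # Squash any real value into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Subtract the max for numerical stability, then normalize
    # exponentials so the outputs sum to 1 (a probability distribution).
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([-1.0, 0.0, 2.0])
print(relu(z))     # negative entries become 0
print(sigmoid(z))  # each value mapped into (0, 1)
print(softmax(z))  # nonnegative values that sum to 1
```

Note how each function constrains its outputs differently: ReLU is unbounded above, sigmoid stays in (0, 1), and softmax produces a full distribution.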
To evaluate models built on these activations, we focus on accuracy for classification tasks, cross-entropy loss to measure prediction quality, and probability calibration, especially for softmax outputs.
Why? Because the activation function determines the range and interpretation of the output values, which in turn determines how meaningfully the model's predictions can be read as classes or probabilities.
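To make the link concrete, here is a minimal sketch of accuracy and cross-entropy computed from softmax-style outputs, assuming NumPy; the probabilities and labels below are hypothetical example data:

```python
import numpy as np

def cross_entropy(probs, labels):
    # Average negative log-probability assigned to the true class;
    # lower is better, 0 would mean perfect confidence in the truth.
    eps = 1e-12  # guard against log(0)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def accuracy(probs, labels):
    # Fraction of samples whose highest-probability class matches the label.
    return np.mean(probs.argmax(axis=1) == labels)

# Hypothetical softmax outputs for three samples over three classes.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.4, 0.3],
])
labels = np.array([0, 1, 2])

print(accuracy(probs, labels))       # 2/3: the last sample is misclassified
print(cross_entropy(probs, labels))  # penalizes low probability on true classes
```

Notice that cross-entropy uses the full probability values while accuracy only looks at the argmax, which is why a well-calibrated model can improve cross-entropy without changing accuracy.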