Complete the code to apply pruning to a model layer.
import tensorflow_model_optimization as tfmot

prune_low_magnitude = tfmot.sparsity.keras.[1]
pruned_model = prune_low_magnitude(model, pruning_schedule=pruning_schedule)
The function prune_low_magnitude is used to apply pruning to a model.
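Conceptually, low-magnitude pruning zeroes the weights with the smallest absolute value until a target sparsity is reached. A minimal pure-Python sketch of that idea (the helper name prune_weights is invented for illustration; this is not the tfmot implementation):

```python
def prune_weights(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest |value|."""
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

weights = [0.5, -0.01, 0.3, 0.002, -0.8, 0.05]
pruned = prune_weights(weights, sparsity=0.5)
# The three smallest-magnitude weights (-0.01, 0.002, 0.05) become zero.
```

In tfmot the same principle is applied gradually during training, following the pruning_schedule, rather than in one shot.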
Complete the code to convert a TensorFlow model to a quantized TFLite model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.[1].OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()
The tf.lite.Optimize enum enables quantization during TFLite conversion; note that OPTIMIZE_FOR_SIZE is now a deprecated alias that behaves the same as tf.lite.Optimize.DEFAULT.
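Under the hood, this kind of quantization maps float32 values to 8-bit integers via a scale factor. A simplified symmetric-quantization sketch in pure Python (the function name quantize is invented for illustration; TFLite's scheme also handles zero points and per-channel scales):

```python
def quantize(values, num_bits=8):
    """Symmetric linear quantization of floats to signed integer codes."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for int8
    scale = max(abs(v) for v in values) / qmax   # one scale for the tensor
    q = [round(v / scale) for v in values]       # integer codes
    dequant = [qi * scale for qi in q]           # reconstructed floats
    return q, dequant, scale

q, dq, scale = quantize([0.0, 0.5, -1.0, 0.25])
```

Storing the integer codes plus a single scale is what shrinks the model roughly 4x relative to float32.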
Fix the error in the pruning callback setup.
callbacks = [
    tfmot.sparsity.keras.UpdatePruningStep(),
    tfmot.sparsity.keras.[1](log_dir)
]
The correct callback for logging pruning summaries is PruningSummaries.
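The two callbacks divide the work: UpdatePruningStep advances the pruning schedule every training step, while PruningSummaries writes sparsity logs for TensorBoard. The general Keras callback pattern they rely on can be sketched in plain Python (the class names below are stand-ins, not the tfmot classes):

```python
class UpdateStep:
    """Stand-in for UpdatePruningStep: tracks the training step counter."""
    def __init__(self):
        self.step = 0
    def on_batch_end(self):
        self.step += 1          # the real callback uses this to drive the schedule

class SummaryLogger:
    """Stand-in for PruningSummaries: records one log entry per batch."""
    def __init__(self):
        self.log = []
    def on_batch_end(self):
        self.log.append("sparsity summary")

callbacks = [UpdateStep(), SummaryLogger()]
for _ in range(3):              # simulated training batches
    for cb in callbacks:
        cb.on_batch_end()
```

Omitting UpdatePruningStep is a common error: without it the pruning schedule never advances and training raises an error.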
Fill both blanks to create a dictionary comprehension that maps layer names to their pruning status.
pruning_status = {layer.name: layer.[1] for layer in model.layers if hasattr(layer, '[2]')}
Both blanks are 'pruned': the comprehension keeps only layers that expose a 'pruned' attribute and maps each layer's name to that attribute's value.
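The comprehension pattern can be exercised without TensorFlow using stand-in layer objects (the 'pruned' attribute here follows the quiz's convention; it is not a standard Keras attribute):

```python
class Layer:
    """Minimal stand-in for a Keras layer with an optional pruning flag."""
    def __init__(self, name, pruned=None):
        self.name = name
        if pruned is not None:
            self.pruned = pruned

layers = [Layer("conv1", True), Layer("dense1", False), Layer("flatten")]
pruning_status = {layer.name: layer.pruned
                  for layer in layers if hasattr(layer, 'pruned')}
# "flatten" has no 'pruned' attribute, so the hasattr filter drops it.
```

The hasattr guard is what makes the comprehension safe on mixed models where only some layers carry the attribute.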
Fill all three blanks to create a quantization-aware training model setup.
import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.[1](model)
quantize_model.compile(optimizer=[2], loss=[3])
The function that prepares a model for quantization-aware training is quantize_model (quantize_annotate_model only marks layers for quantization; quantize_apply must then be called on the result). The optimizer is 'adam' and the loss is 'sparse_categorical_crossentropy'.
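Quantization-aware training works by inserting "fake quantization" into the forward pass: each weight is quantized and immediately dequantized, so the loss sees the quantization error while gradients still flow in float. A minimal sketch of that round trip (pure Python; the function name fake_quant and the fixed scale are illustrative, not the tfmot implementation):

```python
def fake_quant(value, scale=1 / 127):
    """Quantize-dequantize round trip used in a QAT forward pass."""
    q = max(-128, min(127, round(value / scale)))  # clamp to the int8 range
    return q * scale                               # back to float

w = 0.30103
w_q = fake_quant(w)      # the value the loss actually sees
error = w - w_q          # quantization error the network learns to tolerate
```

Training against these rounded values is why QAT models typically lose less accuracy after TFLite conversion than post-training quantization alone.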