Complete the code to load the content image for style transfer.
content_image = load_image('[1]')
The content image is the image whose content we want to preserve. The blank is filled with the filename 'content.jpg'.
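The helper `load_image` is not defined by the card, so here is one plausible sketch of what it might do, assuming Pillow and NumPy are available: open the file, resize so the longest side fits a maximum dimension, and scale pixels to [0, 1].

```python
import numpy as np
from PIL import Image

def load_image(path, max_dim=512):
    """Load an image file, resize so its longest side is max_dim,
    and scale pixel values to the range [0, 1]."""
    img = Image.open(path).convert("RGB")
    scale = max_dim / max(img.size)
    new_size = (round(img.width * scale), round(img.height * scale))
    img = img.resize(new_size)
    return np.asarray(img, dtype=np.float32) / 255.0
```

With this helper in place, the filled-in answer `content_image = load_image('content.jpg')` yields a float array ready to feed to the model.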
Complete the code to extract features from the style image using a pre-trained model.
style_features = model.[1](style_image)
To get the style features, we use the method 'extract_features', which returns the needed representations from the style image.
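In most style-transfer implementations the "style features" are Gram matrices of convolutional feature maps: channel-by-channel correlations of a layer's activations. A NumPy sketch of that computation (the activations here are random stand-ins, not real network outputs):

```python
import numpy as np

def gram_matrix(feature_map):
    """Gram matrix of an (H, W, C) feature map: correlations
    between channels, normalized by the number of spatial positions."""
    h, w, c = feature_map.shape
    flat = feature_map.reshape(h * w, c)   # one row per spatial position
    return flat.T @ flat / (h * w)         # (C, C) channel correlations

# Stand-in for one conv layer's activations on the style image.
activations = np.random.rand(32, 32, 64).astype(np.float32)
style_gram = gram_matrix(activations)
print(style_gram.shape)  # (64, 64)
```

A method like the card's 'extract_features' would typically return one such matrix (or raw feature map) per chosen layer.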
Fix the error in the loss calculation for style transfer.
style_loss = tf.reduce_mean(tf.square([1] - target_style))
The style loss compares the style features of the generated image to the target style features, so the blank is 'style_features'. Because the difference is squared, the order of subtraction does not change the loss.
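The corrected expression is a mean squared difference between the two feature tensors. A NumPy sketch with small made-up features shows the computation and its symmetry:

```python
import numpy as np

def style_loss(style_features, target_style):
    """Mean squared error between generated and target style features."""
    return np.mean(np.square(style_features - target_style))

style_features = np.array([[1.0, 2.0], [3.0, 4.0]])
target_style = np.array([[1.0, 0.0], [3.0, 2.0]])
loss = style_loss(style_features, target_style)
print(loss)  # 2.0
```

Squared differences are (0, 4, 0, 4), so their mean is 2.0, and swapping the two arguments gives the identical result.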
Fill both blanks to create a dictionary of style and content weights for the loss function.
loss_weights = {'style': [1], 'content': [2]}
Common practice in this setup is to set the style weight higher (1e-2) and the content weight lower (1e-3) to balance the two loss terms.
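Filled in with the weights quoted above, the dictionary and its role in the total loss look like this (plain Python; the loss values passed in are arbitrary illustrations):

```python
# Weights from the card: style weighted more heavily than content.
loss_weights = {'style': 1e-2, 'content': 1e-3}

def total_loss(style_loss, content_loss, weights):
    """Weighted sum of the style and content loss terms."""
    return weights['style'] * style_loss + weights['content'] * content_loss

loss = total_loss(style_loss=4.0, content_loss=10.0, weights=loss_weights)
```

Here the total is 1e-2 * 4.0 + 1e-3 * 10.0 = 0.05; raising the style weight pushes optimization toward matching textures, raising the content weight toward preserving layout.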
Fill all three blanks to update the generated image using gradient descent.
with tf.GradientTape() as tape:
    outputs = model([1])
    loss = compute_loss(outputs, [2], [3])
grads = tape.gradient(loss, [1])
optimizer.apply_gradients([(grads, [1])])
The generated image fills every [1] blank: it is passed through the model, the gradient of the loss is taken with respect to it, and the optimizer applies that gradient to update it. The loss compares the model outputs to the style and content targets, which fill blanks [2] and [3].
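TensorFlow's GradientTape computes those gradients automatically; the same update rule can be sketched by hand in NumPy for a toy mean-squared-error loss, where the "generated image" is just an array pulled toward a target:

```python
import numpy as np

target = np.array([1.0, 2.0, 3.0])   # stand-in for the loss target
generated = np.zeros(3)              # start from a blank "image"
learning_rate = 0.1

for _ in range(100):
    # loss = mean((generated - target)^2), whose gradient
    # with respect to `generated` is 2 * (generated - target) / n.
    grads = 2.0 * (generated - target) / generated.size
    generated -= learning_rate * grads   # gradient-descent step
```

After enough steps `generated` converges to `target`, just as repeated `apply_gradients` calls nudge the generated image toward a blend of the style and content targets.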