Deep learning models have many layers. How does having more layers help them understand complex data patterns?
Think about how building blocks combine to form bigger structures.
Deep learning models build understanding step-by-step. Early layers find simple patterns like edges, and later layers combine these into complex shapes or concepts.
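The edge-to-shape idea can be sketched with a tiny, hypothetical example (the signal, the two edge detectors, and the "bump" combination rule are all invented for illustration): layer 1 applies two simple detectors, and layer 2 combines their outputs into a more complex feature.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# A tiny 1-D "image" containing a bump (values rise, then fall).
signal = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])

rising_edge = np.array([-1.0, 1.0])   # fires where values go up
falling_edge = np.array([1.0, -1.0])  # fires where values go down

# Layer 1: slide each simple detector over the signal
# (reversing the kernel turns np.convolve into cross-correlation).
layer1 = np.stack([
    relu(np.convolve(signal, rising_edge[::-1], mode="valid")),
    relu(np.convolve(signal, falling_edge[::-1], mode="valid")),
])

# Layer 2: a "bump" is a rising edge plus a falling edge.
bump_score = relu(layer1[0].max() + layer1[1].max() - 1.0)
print(bump_score)  # 1.0: both simple features fired, so the complex one fires
```

Neither layer alone knows what a bump is; the concept only emerges from composing the simpler detections, which is the step-by-step build-up described above.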
Given the following code that applies two layers of transformations, what is the output?
import numpy as np

def relu(x):
    return np.maximum(0, x)

input_data = np.array([1, -2, 3])
layer1 = relu(input_data * 2)
layer2 = relu(layer1 - 1)
print(layer2)
Calculate step-by-step: multiply, apply relu, subtract, apply relu again.
Input multiplied by 2: [2, -4, 6]. ReLU sets negatives to 0: [2, 0, 6]. Subtract 1: [1, -1, 5]. ReLU again: [1, 0, 5].
A tempting distractor is [2, 0, 4], but redoing each step confirms the output is [1 0 5].
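The intermediate values in the walkthrough above can be checked by printing each step separately:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

input_data = np.array([1, -2, 3])

step1 = input_data * 2  # multiply:        [ 2 -4  6]
step2 = relu(step1)     # ReLU zeroes -4:  [2 0 6]
step3 = step2 - 1       # subtract 1:      [ 1 -1  5]
step4 = relu(step3)     # ReLU zeroes -1:  [1 0 5]

print(step1, step2, step3, step4)
```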
You want to recognize complex objects in images, such as animals in different poses and backgrounds. Which model type is best suited?
Think about models that can learn spatial features and patterns in images.
Convolutional neural networks (CNNs) are designed to capture spatial hierarchies in images: convolutional layers detect edges and textures early on and combine them into complex shapes across deeper layers, which makes CNNs well suited to recognizing objects across varied poses and backgrounds.
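The core operation of a convolutional layer can be sketched in plain NumPy (the image and filter here are invented toy values, not a full CNN): slide a small filter over the image and record how strongly each patch matches.

```python
import numpy as np

# Toy 4x4 image: dark on the left, bright on the right.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A 1x2 filter that fires where brightness jumps left-to-right.
vertical_edge = np.array([[-1.0, 1.0]])

h, w = image.shape
fh, fw = vertical_edge.shape
feature_map = np.zeros((h - fh + 1, w - fw + 1))

# Slide the filter over every valid position (the convolution).
for i in range(h - fh + 1):
    for j in range(w - fw + 1):
        patch = image[i:i + fh, j:j + fw]
        feature_map[i, j] = np.sum(patch * vertical_edge)

print(feature_map)  # peaks in the column where the edge sits
```

Because the same small filter is reused at every position, the detector responds to the edge wherever it appears, which is why CNNs tolerate shifts in pose and background.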
What is a common effect of increasing the number of layers (depth) in a deep learning model?
Think about training difficulties with very deep networks.
Increasing depth allows the model to learn more complex features, but it can also introduce training problems such as vanishing gradients, which make optimization harder.
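The vanishing-gradient effect can be illustrated with a back-of-the-envelope sketch (the 0.25 bound is a real property of the sigmoid's derivative; the rest is a simplification that ignores weights): backpropagation multiplies one derivative term per layer, so the product shrinks geometrically with depth.

```python
# Sigmoid's derivative s(x) * (1 - s(x)) is at most 0.25,
# so a chain of sigmoid layers scales the gradient by <= 0.25 per layer.
SIGMOID_DERIV_MAX = 0.25

for depth in [1, 5, 10, 20]:
    grad_scale = SIGMOID_DERIV_MAX ** depth
    print(f"depth {depth:2d}: gradient scale <= {grad_scale:.2e}")
```

By 20 layers the upper bound is below 1e-12, so early layers receive almost no learning signal, which is the optimization difficulty referred to above.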
A deep learning model shows decreasing training loss but constant validation loss over epochs. What does this indicate?
Think about what it means when training improves but validation does not.
When training loss decreases but validation loss stays flat or rises, the model is overfitting: it fits the training data too closely and fails to generalize to unseen data.
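This diagnosis can be automated with a simple check (the loss curves and the `looks_like_overfitting` helper below are hypothetical, invented for illustration): training loss keeps improving while validation loss plateaus.

```python
# Toy loss histories showing the overfitting signature described above.
train_loss = [1.0, 0.7, 0.5, 0.35, 0.25, 0.18]
val_loss   = [1.0, 0.8, 0.72, 0.71, 0.71, 0.72]

def looks_like_overfitting(train, val, tol=0.02):
    # Training still improving by more than tol per epoch...
    train_improving = train[-1] < train[-2] - tol
    # ...while validation is essentially flat (or rising).
    val_stalled = val[-1] >= val[-2] - tol
    return train_improving and val_stalled

print(looks_like_overfitting(train_loss, val_loss))  # True
```

In practice this kind of check underlies early stopping: halt training once the validation curve stops improving, rather than letting the model keep memorizing the training set.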