You want to build a neural network to recognize objects in photos. Which layer type is best to start with for extracting features like edges and shapes?
Think about which layer type is designed to scan images for patterns.
Convolutional layers scan images with small filters to detect edges and shapes, making them ideal for image tasks.
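To make the scanning idea concrete, here is a minimal NumPy sketch (the 5x5 image and the vertical-edge kernel are illustrative assumptions, not from the question) of a single 3x3 filter sliding over an image and responding strongly at an edge:

```python
import numpy as np

# Toy image: left half dark (0), right half bright (1), so there is a
# vertical edge between columns 1 and 2.
img = np.zeros((5, 5))
img[:, 2:] = 1.0

# A Sobel-like 3x3 filter that responds to left-to-right brightness changes.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

# Slide the filter over every valid 3x3 window (a "valid" convolution).
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = (img[i:i + 3, j:j + 3] * kernel).sum()

print(out)  # windows straddling the edge score 3.0; flat regions score 0.0
```

A Conv2D layer does exactly this, but with many filters whose values are learned during training rather than hand-picked.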
You have a dense (fully connected) layer with 128 neurons. If the input to this layer has shape (batch_size, 64), what will be the output shape?
Each neuron produces one output per input example.
A dense layer outputs one value per neuron for each input example, so output shape is (batch_size, number_of_neurons).
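The shape rule follows directly from the math: a dense layer is a matrix multiply plus a bias. A hedged NumPy sketch using the numbers from this question:

```python
import numpy as np

batch_size, in_features, neurons = 32, 64, 128  # batch size is illustrative

x = np.random.rand(batch_size, in_features)  # layer input: (32, 64)
W = np.random.rand(in_features, neurons)     # weight matrix: (64, 128)
b = np.zeros(neurons)                        # one bias per neuron

y = x @ W + b
print(y.shape)  # (32, 128), i.e. (batch_size, number_of_neurons)
```

The 64 input features disappear into the matrix multiply; only the batch dimension and the neuron count survive in the output.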
What is the output shape of the tensor after applying the following layers in order?
Input shape: (32, 64, 64, 3)  # batch_size=32, 64x64 RGB images
Conv2D(filters=16, kernel_size=3, padding='same')
MaxPooling2D(pool_size=2)
Remember: 'same' padding preserves width and height, while pooling halves them.
The Conv2D with 'same' padding keeps the spatial size at 64x64 and changes the channel count to 16. MaxPooling with pool size 2 then halves width and height to 32x32, giving a final output shape of (32, 32, 32, 16).
Consider this simple neural network code snippet:
model = Sequential()
model.add(Dense(64, input_shape=(100,)))
model.add(Conv2D(32, kernel_size=3))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
What error will this code raise when run?
Check the input shape expected by Conv2D layers.
Conv2D layers expect 4D input (batch, height, width, channels), but the Dense layer outputs 2D (batch, features), so Keras raises a ValueError when the Conv2D layer is added.
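The rank mismatch can be shown without Keras (the reshape fix below is an illustrative assumption, one of several possible repairs, such as moving the Conv2D before the Dense layer):

```python
import numpy as np

# Dense(64) on input (batch, 100) emits a rank-2 tensor:
dense_out = np.zeros((32, 64))   # (batch_size, features)
print(dense_out.ndim)            # 2, but Conv2D needs rank 4

# One possible fix: reshape the 64 features into a tiny 8x8 single-channel
# "image" so a Conv2D layer downstream has height/width/channel axes to scan.
as_image = dense_out.reshape(32, 8, 8, 1)
print(as_image.shape)            # (32, 8, 8, 1): rank 4
```

In Keras this reshape would be a `Reshape((8, 8, 1))` layer inserted between the Dense and Conv2D layers.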
What is the main effect of increasing the number of layers (depth) in a neural network architecture?
Think about how deeper networks learn features step-by-step.
More layers allow the model to learn complex, hierarchical features: early layers capture simple patterns such as edges, and deeper layers combine them into higher-level concepts, increasing the model's capacity to represent data patterns.