Challenge - 5 Problems
Forward Propagation Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Difficulty: intermediate · Time limit: 2:00
Output of a simple forward propagation step
Given the following code for a single neuron forward propagation, what is the output value?
ML Python
import numpy as np

inputs = np.array([1.0, 2.0, 3.0])
weights = np.array([0.2, 0.5, -0.3])
bias = 0.1

output = np.dot(inputs, weights) + bias
print(output)
💡 Hint
Remember to multiply each input by its weight, sum them, then add the bias.
✅ Explanation
The dot product is (1.0*0.2) + (2.0*0.5) + (3.0*(-0.3)) = 0.2 + 1.0 - 0.9 = 0.3. Adding the bias 0.1 gives 0.4.
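The arithmetic above can be checked by hand, without np.dot, in a short sketch mirroring the question's code:

```python
import numpy as np

inputs = np.array([1.0, 2.0, 3.0])
weights = np.array([0.2, 0.5, -0.3])
bias = 0.1

# Multiply each input by its weight and sum: 0.2 + 1.0 - 0.9 = 0.3
weighted_sum = sum(i * w for i, w in zip(inputs, weights))

# Add the bias: 0.3 + 0.1 = 0.4
output = weighted_sum + bias
print(round(output, 2))  # 0.4
```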
❓ Model Choice
Difficulty: intermediate · Time limit: 1:30
Choosing the correct activation function for forward propagation
Which activation function is best suited for a binary classification problem during forward propagation?
💡 Hint
Think about output values between 0 and 1 representing probabilities.
✅ Explanation
Sigmoid outputs values between 0 and 1, which can be interpreted as probabilities for binary classification.
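For reference, a minimal sigmoid sketch (the input value 0.4 is just an illustration, not part of the question):

```python
import numpy as np

def sigmoid(z):
    # Maps any real number into (0, 1), so the output
    # can be read as P(class = 1)
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))  # 0.5, the decision boundary
print(sigmoid(0.4))  # roughly 0.6
```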
❓ Hyperparameter
Difficulty: advanced · Time limit: 2:00
Effect of learning rate on forward propagation outputs
If the learning rate is set too high during training, how does it affect the forward propagation outputs in subsequent iterations?
💡 Hint
Consider how big steps in weight updates affect predictions.
✅ Explanation
A too-high learning rate causes large weight updates, so the forward-pass outputs jump around from iteration to iteration and the model fails to converge.
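The effect can be illustrated with plain gradient descent on a toy objective (an assumed example, f(w) = w**2, not from the challenge itself):

```python
def train(lr, w=1.0, steps=20):
    # Gradient of f(w) = w**2 is 2*w; apply repeated weight updates
    for _ in range(steps):
        w -= lr * 2 * w
    return w

# A modest learning rate shrinks w toward the minimum at 0...
print(abs(train(lr=0.1)))   # about 0.01

# ...while a too-high rate overshoots and the weight grows each step,
# so forward-pass outputs in later iterations swing ever wider.
print(abs(train(lr=1.1)))   # about 38
```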
❓ Metrics
Difficulty: advanced · Time limit: 2:00
Calculating accuracy from forward propagation predictions
A model outputs the following predictions after forward propagation for 5 samples: [0.9, 0.4, 0.6, 0.3, 0.8]. The true labels are [1, 0, 1, 0, 1]. Using a threshold of 0.5, what is the accuracy?
💡 Hint
Convert predictions to 0 or 1 using threshold, then compare to true labels.
✅ Explanation
Thresholded at 0.5, the predictions become [1, 0, 1, 0, 1], which exactly matches the true labels [1, 0, 1, 0, 1]. So 5 out of 5 correct = 100% accuracy.
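The accuracy computation can be sketched directly in NumPy:

```python
import numpy as np

preds = np.array([0.9, 0.4, 0.6, 0.3, 0.8])
labels = np.array([1, 0, 1, 0, 1])

# Threshold at 0.5 to get hard class predictions: [1, 0, 1, 0, 1]
hard = (preds >= 0.5).astype(int)

# Accuracy is the fraction of predictions matching the true labels
accuracy = (hard == labels).mean()
print(accuracy)  # 1.0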
🔧 Debug
Difficulty: expert · Time limit: 2:30
Identifying the bug in forward propagation code
What error does the following code raise during forward propagation?
ML Python
import numpy as np

def forward_prop(inputs, weights, bias):
    return np.dot(weights, inputs) + bias

inputs = np.array([1, 2, 3])
weights = np.array([0.2, 0.5])
bias = 0.1

output = forward_prop(inputs, weights, bias)
print(output)
💡 Hint
Check if input and weight arrays have matching sizes for dot product.
✅ Explanation
The weights array has length 2 but the inputs array has length 3, so np.dot(weights, inputs) raises a ValueError due to the shape mismatch.
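A sketch reproducing the error, plus one possible fix (the third weight, -0.3, is an illustrative choice, not from the challenge):

```python
import numpy as np

def forward_prop(inputs, weights, bias):
    return np.dot(weights, inputs) + bias

inputs = np.array([1, 2, 3])
bad_weights = np.array([0.2, 0.5])  # only 2 weights for 3 inputs

try:
    forward_prop(inputs, bad_weights, 0.1)
    raised = False
except ValueError as e:
    # e.g. "shapes (2,) and (3,) not aligned ..."
    raised = True
    print("ValueError:", e)

# The fix is one weight per input.
good_weights = np.array([0.2, 0.5, -0.3])
print(forward_prop(inputs, good_weights, 0.1))  # about 0.4
```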