ML Python · ~5 mins

Neural network architecture in ML Python - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is a neural network architecture?
It is the design or layout of a neural network: how many layers it has, how many neurons each layer contains, and how those neurons connect to process information.
beginner
Name the three main types of layers in a simple neural network.
Input layer, hidden layer(s), and output layer.
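As a rough sketch of the three layer types, an architecture can be written as a list of layer sizes. The 3-4-2 shape below is purely illustrative, not taken from the card above:

```python
# Illustrative architecture: 3 input neurons, one hidden layer of 4, 2 outputs.
layer_sizes = [3, 4, 2]  # [input layer, hidden layer, output layer]

# Every connection between adjacent layers carries one weight,
# and every non-input neuron has one bias.
num_weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
num_biases = sum(layer_sizes[1:])

print(num_weights)  # 3*4 + 4*2 = 20
print(num_biases)   # 4 + 2 = 6
```

Counting parameters this way is a quick sanity check when comparing architectures: adding neurons or layers grows the weight count multiplicatively, not additively.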
intermediate
What role do activation functions play in neural networks?
They help the network learn complex patterns by adding non-linearity to the output of neurons.
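A minimal sketch of two common activation functions. Without a non-linearity like these, stacking layers collapses into a single linear transformation, so the network could only learn straight-line relationships:

```python
import math

def relu(x):
    """Rectified Linear Unit: passes positives through, zeroes out negatives."""
    return max(0.0, x)

def sigmoid(x):
    """Squashes any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0))   # 0.0  -- negative inputs are clipped
print(relu(3.0))    # 3.0  -- positive inputs pass through unchanged
print(sigmoid(0.0)) # 0.5  -- the sigmoid's midpoint
```

The "kink" at zero in ReLU is exactly the non-linearity the flashcard refers to: it lets compositions of layers represent curves and decision boundaries that no linear function can.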
intermediate
Explain the difference between a shallow and a deep neural network.
A shallow network has few hidden layers, while a deep network has many hidden layers, allowing it to learn more complex features.
beginner
What is the purpose of weights and biases in a neural network?
Weights control the strength of connections between neurons, and biases allow shifting the activation function to better fit data.
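The role of weights and biases can be seen in a single neuron's forward pass. All the numbers here are made up for illustration:

```python
# Hypothetical values for one neuron with three inputs.
inputs  = [0.5, -1.0, 2.0]
weights = [0.4, 0.3, -0.2]  # strength of each connection
bias    = 0.1               # shifts where the activation "switches on"

# Weighted sum of inputs, plus the bias term:
# 0.4*0.5 + 0.3*(-1.0) + (-0.2)*2.0 + 0.1 = -0.4
z = sum(w * x for w, x in zip(weights, inputs)) + bias

# ReLU activation: this neuron stays inactive because z is negative.
output = max(0.0, z)
print(output)  # 0.0
```

Training adjusts exactly these weights and the bias; a larger bias here would shift `z` upward and could flip the neuron from inactive to active for the same inputs.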
Which layer in a neural network receives the raw input data?
A. Activation layer
B. Hidden layer
C. Output layer
D. Input layer
What does adding more hidden layers to a neural network usually allow it to do?
A. Learn simpler patterns
B. Reduce training time
C. Learn more complex patterns
D. Avoid overfitting
Which of these is NOT a common activation function?
A. ReLU
B. Linear Regression
C. Softmax
D. Sigmoid
What do weights in a neural network represent?
A. The strength of connections between neurons
B. The input data
C. The output predictions
D. The number of layers
Why do neural networks use biases?
A. To shift activation functions for better learning
B. To reduce the number of neurons
C. To store input data
D. To add randomness
Describe the basic structure of a neural network and the role of each layer.
Think about how data enters, gets processed, and then produces results.
Explain why activation functions are important in neural networks.
Consider what would happen if neurons only did simple math without activation.