Forward Propagation in Neural Networks
Understand how forward propagation works in neural networks. Learn how inputs move through layers, how weights and biases transform data, and how activation functions generate predictions in deep learning models.
⏩ Forward Propagation (FP)
Forward propagation computes the hypothesis $h_\theta(x)$ by passing the input forward through each layer of the neural network, one layer at a time.
General Form
For any network layer $l$:

Linear Term (pre-activation):

$$z^{(l)} = \Theta^{(l-1)} a^{(l-1)}$$

Activation Term:

$$a^{(l)} = g(z^{(l)})$$

Or, looking one layer forward, we can rewrite this as:

$$a^{(l+1)} = g\left(\Theta^{(l)} a^{(l)}\right)$$

Where
- $a^{(l)}$ = activations of layer $l$
- $z^{(l)}$ = linear combination before activation
- $\Theta^{(l)}$ = weight matrix between layer $l$ and layer $l+1$
- $g$ = activation function (e.g. the sigmoid)
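The general form above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming the sigmoid as the activation function $g$; the function name `forward_layer` is ours, not from any library:

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(theta, a_prev):
    """One forward-propagation step: a^(l+1) = g(Theta^(l) a^(l)).

    theta  : weight matrix of shape (units in layer l+1, units in layer l)
    a_prev : activation vector of layer l
    """
    z = theta @ a_prev     # linear term (pre-activation)
    return sigmoid(z)      # activation term
```

Each layer is just a matrix-vector product followed by an elementwise nonlinearity, so a whole network is a chain of `forward_layer` calls.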
Advanced Example: A 4-Layer Neural Network
Example: Assume a 4-layer neural network with the following architecture:
- Input layer: 3 units
- Hidden layer 1: 3 units
- Hidden layer 2: 3 units
- Output layer: 1 unit
```mermaid
graph LR
    %% Input Layer
    subgraph Input Layer
        x1(((x1)))
        x2(((x2)))
        x3(((x3)))
    end
    %% Hidden Layer 1
    subgraph Hidden Layer 1
        a1{a1}
        a2{a2}
        a3{a3}
    end
    %% Hidden Layer 2
    subgraph Hidden Layer 2
        b1{b1}
        b2{b2}
        b3{b3}
    end
    %% Output Layer
    subgraph Output Layer
        y(((hθx)))
    end
    %% Connections: Input → Hidden 1
    x1 --> a1
    x1 --> a2
    x1 --> a3
    x2 --> a1
    x2 --> a2
    x2 --> a3
    x3 --> a1
    x3 --> a2
    x3 --> a3
    %% Connections: Hidden 1 → Hidden 2
    a1 --> b1
    a1 --> b2
    a1 --> b3
    a2 --> b1
    a2 --> b2
    a2 --> b3
    a3 --> b1
    a3 --> b2
    a3 --> b3
    %% Connections: Hidden 2 → Output
    b1 --> y
    b2 --> y
    b3 --> y
```
Weight Matrix:

$\Theta^{(l)}$ has dimension $s_{l+1} \times (s_l + 1)$, where $s_l$ is the number of units in layer $l$ (the $+1$ accounts for the bias unit). Here $\Theta^{(1)}$ and $\Theta^{(2)}$ are $3 \times 4$, and $\Theta^{(3)}$ is $1 \times 4$.

Layer 1 (Input Layer)

Forward Pass: the input layer simply passes the input through unchanged:

$$a^{(1)} = x$$

With bias term:

$$a^{(1)} = \begin{bmatrix} 1 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
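Prepending the bias unit to an activation vector can be sketched like this (a minimal helper, assuming NumPy; the name `add_bias` is ours):

```python
import numpy as np

def add_bias(a):
    # Prepend the bias unit a_0 = 1 to an activation vector.
    return np.concatenate(([1.0], a))
```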
Layer 2

Linear step:

$$z^{(2)} = \Theta^{(1)} a^{(1)}$$

Activation Function:

$$a^{(2)} = g(z^{(2)})$$

Add bias:

$$a_0^{(2)} = 1$$

Activation of Neurons in Layer 2

First neuron in layer 2:

$$a_1^{(2)} = g\left(\Theta_{10}^{(1)} x_0 + \Theta_{11}^{(1)} x_1 + \Theta_{12}^{(1)} x_2 + \Theta_{13}^{(1)} x_3\right)$$

Second neuron in layer 2:

$$a_2^{(2)} = g\left(\Theta_{20}^{(1)} x_0 + \Theta_{21}^{(1)} x_1 + \Theta_{22}^{(1)} x_2 + \Theta_{23}^{(1)} x_3\right)$$

Third neuron in layer 2:

$$a_3^{(2)} = g\left(\Theta_{30}^{(1)} x_0 + \Theta_{31}^{(1)} x_1 + \Theta_{32}^{(1)} x_2 + \Theta_{33}^{(1)} x_3\right)$$

Generalized:

$$a_i^{(2)} = g\left(\sum_{j=0}^{3} \Theta_{ij}^{(1)} x_j\right) \quad \text{for } i = 1, 2, 3$$
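The per-neuron sums and the matrix-vector product are two views of the same computation. A quick sketch with illustrative (made-up) weights shows they agree, assuming a sigmoid activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative weights: Theta^(1) is 3x4 (3 units in layer 2; 3 inputs + bias).
theta1 = np.array([[0.1, 0.2, 0.3, 0.4],
                   [0.5, 0.4, 0.3, 0.2],
                   [0.0, 0.1, 0.0, 0.1]])
a1 = np.array([1.0, 0.5, -0.5, 2.0])  # [x0 = 1 (bias), x1, x2, x3]

# Neuron by neuron: a_i^(2) = g(sum_j Theta_ij^(1) x_j)
a2_elementwise = np.array([
    sigmoid(sum(theta1[i, j] * a1[j] for j in range(4)))
    for i in range(3)
])

# Vectorized: a^(2) = g(Theta^(1) a^(1))
a2_vectorized = sigmoid(theta1 @ a1)
```

In practice the vectorized form is what you write: it is shorter and lets NumPy (or a GPU library) do the loop.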
Layer 3

Linear step:

$$z^{(3)} = \Theta^{(2)} a^{(2)}$$

Activation:

$$a^{(3)} = g(z^{(3)})$$

(Add the bias unit $a_0^{(3)} = 1$ if needed.)

Activation of Neurons in Layer 3

For each neuron in layer 3:

$$a_i^{(3)} = g\left(\sum_{j=0}^{3} \Theta_{ij}^{(2)} a_j^{(2)}\right) \quad \text{for } i = 1, 2, 3$$
Layer 4 (Output Layer): Hypothesis

Linear step:

$$z^{(4)} = \Theta^{(3)} a^{(3)}$$

Final activation:

$$h_\theta(x) = a^{(4)} = g(z^{(4)})$$

The final hypothesis $h_\theta(x)$ is the activation of the single neuron in layer 4, the output layer.
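Putting the layers together, the full forward pass for this 3-3-3-1 network can be sketched as follows. This is a minimal illustration, assuming sigmoid activations throughout and randomly initialized placeholder weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, theta1, theta2, theta3):
    """Forward propagation through the 4-layer (3-3-3-1) network.

    theta1, theta2 : shape (3, 4) -- 3 units, 3 previous units + bias
    theta3         : shape (1, 4)
    Returns the scalar hypothesis h_theta(x).
    """
    a1 = np.concatenate(([1.0], x))    # layer 1: input plus bias unit
    a2 = sigmoid(theta1 @ a1)          # layer 2: z^(2) = Theta^(1) a^(1)
    a2 = np.concatenate(([1.0], a2))   # add bias unit a_0^(2) = 1
    a3 = sigmoid(theta2 @ a2)          # layer 3: z^(3) = Theta^(2) a^(2)
    a3 = np.concatenate(([1.0], a3))   # add bias unit a_0^(3) = 1
    a4 = sigmoid(theta3 @ a3)          # layer 4: output
    return a4[0]

# Example run with random placeholder weights (not trained values).
rng = np.random.default_rng(0)
h = forward_pass(np.array([0.2, -0.4, 1.0]),
                 rng.normal(size=(3, 4)),
                 rng.normal(size=(3, 4)),
                 rng.normal(size=(1, 4)))
```

With a sigmoid output the hypothesis always lands in (0, 1), which is why this architecture is a natural fit for binary classification.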
