Examples and Intuitions I — Neural Networks as Logical Gates
A simple example of applying neural networks is predicting logical operations like AND and OR. By choosing appropriate weights and bias, a single logistic neuron can simulate these gates. This illustrates the power of neural networks to represent complex functions by stacking simple units.
Neural Networks as Logical Gates
A single logistic neuron can simulate logical gates.
By adjusting:
- Bias (threshold)
- Weights (importance of inputs)
we can model:
- AND
- OR
- NOT
Neural networks are powerful because stacking these simple units allows us to represent much more complex functions.
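As a concrete sketch of such a unit (a minimal Python version; `sigmoid` and `neuron` are illustrative names, not part of any course code), a single logistic neuron computes $g(\Theta^T x)$ over its inputs plus a bias:

```python
import math

def sigmoid(z):
    # Logistic activation g(z): maps any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(theta, x):
    # Single logistic neuron: h(x) = g(theta^T x),
    # where x includes the bias unit x0 = 1 as its first entry.
    z = sum(t * xi for t, xi in zip(theta, x))
    return sigmoid(z)
```

With large-magnitude weights, the sigmoid saturates near 0 or 1, which is what lets one neuron act like a crisp logic gate.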
Implementing the AND Operator
The logical AND operator is true only when both $x_1 = 1$ and $x_2 = 1$. Otherwise, it is false.
| $x_1$ | $x_2$ | Result |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 1 | 1 |
Network Structure
```mermaid
graph LR
    %% Input Layer
    subgraph Input Layer
        x0(((x0)))
        x1(((x1)))
        x2(((x2)))
    end
    %% Hidden Layer 1
    subgraph Hidden Layer 1
        a1{a1}
    end
    %% Output Layer
    subgraph Output Layer
        y(((hθx)))
    end
    %% Connections: Input → Hidden 1
    x0 --> a1
    x1 --> a1
    x2 --> a1
    %% Connections: Hidden 1 → Output
    a1 --> y
```
Here $x_0 = 1$ is the bias unit.
Choosing the Weights
Consider the weight matrix:
$$\Theta^{(1)} = \begin{bmatrix} -30 & 20 & 20 \end{bmatrix}$$
The hypothesis becomes:
$$h_\Theta(x) = g(-30 + 20x_1 + 20x_2)$$
Evaluating All Input Combinations
| $x_1$ | $x_2$ | $h_\Theta(x)$ | Expected |
|---|---|---|---|
| 0 | 0 | $g(-30) \approx 0$ | 0 |
| 1 | 0 | $g(-10) \approx 0$ | 0 |
| 0 | 1 | $g(-10) \approx 0$ | 0 |
| 1 | 1 | $g(10) \approx 1$ | 1 |
Conclusion
With this choice of weights, $\Theta^{(1)} = \begin{bmatrix} -30 & 20 & 20 \end{bmatrix}$, the neural network behaves exactly like an AND gate.
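The truth table above can be verified numerically (a minimal Python sketch; `sigmoid` is a hypothetical helper standing in for $g$):

```python
import math

def sigmoid(z):
    # Logistic function g(z)
    return 1.0 / (1.0 + math.exp(-z))

# AND weights: bias -30, each input weighted +20
theta = [-30.0, 20.0, 20.0]

for x1 in (0, 1):
    for x2 in (0, 1):
        h = sigmoid(theta[0] + theta[1] * x1 + theta[2] * x2)
        # h is ~0 unless both inputs are 1, matching x1 AND x2
        print(f"{x1} AND {x2} -> {round(h)}")
```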
Implementing the OR Operator
The logical OR operator is true when:
- $x_1 = 1$, or
- $x_2 = 1$, or both
| $x_1$ | $x_2$ | Result |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 1 | 1 |
We can implement OR using a different set of weights:
$$\Theta^{(1)} = \begin{bmatrix} -10 & 20 & 20 \end{bmatrix}$$
The hypothesis becomes:
$$h_\Theta(x) = g(-10 + 20x_1 + 20x_2)$$
Evaluating All Input Combinations
| $x_1$ | $x_2$ | $h_\Theta(x)$ | Expected |
|---|---|---|---|
| 0 | 0 | $g(-10) \approx 0$ | 0 |
| 1 | 0 | $g(10) \approx 1$ | 1 |
| 0 | 1 | $g(10) \approx 1$ | 1 |
| 1 | 1 | $g(30) \approx 1$ | 1 |
Conclusion
With this choice of weights, $\Theta^{(1)} = \begin{bmatrix} -10 & 20 & 20 \end{bmatrix}$, the same neural network behaves exactly like an OR gate.
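The same check works for OR, where only the bias changes (again a Python sketch with `sigmoid` as an illustrative helper):

```python
import math

def sigmoid(z):
    # Logistic function g(z)
    return 1.0 / (1.0 + math.exp(-z))

# OR weights: only the bias differs from the AND network
theta = [-10.0, 20.0, 20.0]

for x1 in (0, 1):
    for x2 in (0, 1):
        h = sigmoid(theta[0] + theta[1] * x1 + theta[2] * x2)
        # h is ~1 whenever at least one input is 1, matching x1 OR x2
        print(f"{x1} OR {x2} -> {round(h)}")
```

Shifting the bias from $-30$ to $-10$ moves the decision threshold so that a single active input is enough to push the neuron past 0.5.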
Implementing the NOT Operator ($\neg x_1$)
```mermaid
graph LR
    %% Input Layer
    subgraph Input Layer
        x0(((x0)))
        x1(((x1)))
    end
    %% Hidden Layer 1
    subgraph Hidden Layer 1
        a1{a1}
    end
    %% Output Layer
    subgraph Output Layer
        y(((hθx)))
    end
    %% Connections: Input → Hidden 1
    x0 --> a1
    x1 --> a1
    %% Connections: Hidden 1 → Output
    a1 --> y
```
The logical NOT operator is true when $x_1 = 0$, and false when $x_1 = 1$.
| $x_1$ | Result |
|---|---|
| 0 | 1 |
| 1 | 0 |
We can implement NOT using weights:
$$\Theta^{(1)} = \begin{bmatrix} 10 & -20 \end{bmatrix}$$
The hypothesis becomes:
$$h_\Theta(x) = g(10 - 20x_1)$$
| $x_1$ | $h_\Theta(x)$ | Expected |
|---|---|---|
| 0 | $g(10) \approx 1$ | 1 |
| 1 | $g(-10) \approx 0$ | 0 |
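The NOT network inverts its single input by pairing a large positive bias with a large negative input weight (a Python sketch; `sigmoid` is an illustrative helper):

```python
import math

def sigmoid(z):
    # Logistic function g(z)
    return 1.0 / (1.0 + math.exp(-z))

# NOT weights: large positive bias, large negative input weight
theta = [10.0, -20.0]

for x1 in (0, 1):
    h = sigmoid(theta[0] + theta[1] * x1)
    # h is ~1 when x1 = 0 and ~0 when x1 = 1, matching NOT x1
    print(f"NOT {x1} -> {round(h)}")
```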
Summary
By choosing appropriate weights, a single logistic neuron can simulate logic gates such as AND, OR, and NOT.
