



Examples and Intuitions I — Neural Networks as Logical Gates

A simple example of applying neural networks is computing logical operations like AND and OR. By choosing appropriate weights and a bias, a single logistic neuron can simulate these gates. This illustrates the power of neural networks: stacking simple units lets them represent complex functions.

Hitesh Sahu
Written by Hitesh Sahu, a passionate developer and blogger.

Fri Feb 27 2026


Neural Networks as Logical Gates

A single logistic neuron can simulate logical gates.

By adjusting:

  • Bias (threshold)
  • Weights (importance of inputs)

we can model:

  • AND
  • OR
  • NOT

Neural networks are powerful because stacking these simple units allows us to represent much more complex functions.

Implementing the AND Operator ($x_1 \land x_2$)

The logical AND operator is true only when:

  • $x_1 = 1$
  • $x_2 = 1$

Otherwise, it is false.

| $x_1$ | $x_2$ | Result |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 1 | 1 |

Network Structure


```mermaid
graph LR

%% Input Layer
    subgraph Input Layer
        x0(((x0)))
        x1(((x1)))
        x2(((x2)))
    end

%% Hidden Layer 1
    subgraph Hidden Layer 1
        a1{a1}
    end

%% Output Layer
    subgraph Output Layer
        y(((hθx)))
    end

%% Connections: Input → Hidden 1
    x0 --> a1
    x1 --> a1
    x2 --> a1

%% Connections: Hidden 1 → Output
    a1 --> y
```

Our small neural network looks like:

$$\begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} \rightarrow g(z^{(2)}) \rightarrow h_\Theta(x)$$

where $x_0 = 1$ is the bias unit.

Choosing the Weights

Consider the weight matrix:

$$\Theta^{(1)} = \begin{bmatrix} -30 & 20 & 20 \end{bmatrix}$$

The hypothesis becomes:

$$h_\Theta(x) = g(-30 + 20x_1 + 20x_2)$$

Evaluating All Input Combinations

| $x_1$ | $x_2$ | Expected | $h_\Theta(x)$ |
| --- | --- | --- | --- |
| 0 | 0 | 0 | $g(-30) \approx 0$ |
| 1 | 0 | 0 | $g(-10) \approx 0$ |
| 0 | 1 | 0 | $g(-10) \approx 0$ |
| 1 | 1 | 1 | $g(10) \approx 1$ |

Conclusion

With this choice of weights:

$$\Theta^{(1)} = \begin{bmatrix} -30 & 20 & 20 \end{bmatrix}$$

the neural network behaves exactly like an AND gate.
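This behavior is easy to verify numerically. Below is a minimal Python sketch of the AND neuron with the weights above (the function names `g` and `h_and` are my own, not from the post):

```python
import math

def g(z):
    # Logistic (sigmoid) activation: g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def h_and(x1, x2):
    # Theta = [-30, 20, 20]; the bias unit x0 = 1 multiplies -30
    return g(-30 + 20 * x1 + 20 * x2)

# Reproduce the truth table: only (1, 1) yields an output near 1
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(h_and(x1, x2)))
```

Because the sigmoid saturates, $g(\pm 10)$ is already within $10^{-4}$ of 0 or 1, so rounding recovers the exact AND truth table.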


Implementing the OR Operator ($x_1 \lor x_2$)

The logical OR operator is true when:

  • $x_1 = 1$, or
  • $x_2 = 1$, or both

| $x_1$ | $x_2$ | Result |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 1 | 1 |

We can implement OR using a different set of weights:

$$\Theta^{(1)} = \begin{bmatrix} -10 & 20 & 20 \end{bmatrix}$$

The hypothesis becomes:

$$h_\Theta(x) = g(-10 + 20x_1 + 20x_2)$$

Evaluating All Input Combinations

| $x_1$ | $x_2$ | Expected | $h_\Theta(x)$ |
| --- | --- | --- | --- |
| 0 | 0 | 0 | $g(-10) \approx 0$ |
| 1 | 0 | 1 | $g(10) \approx 1$ |
| 0 | 1 | 1 | $g(10) \approx 1$ |
| 1 | 1 | 1 | $g(30) \approx 1$ |

Conclusion

With this choice of weights:

$$\Theta^{(1)} = \begin{bmatrix} -10 & 20 & 20 \end{bmatrix}$$

the same neural network behaves exactly like an OR gate.


Implementing the NOT Gate ($\neg x_1$)

```mermaid
graph LR

%% Input Layer
    subgraph Input Layer
        x0(((x0)))
        x1(((x1)))
    end

%% Hidden Layer 1
    subgraph Hidden Layer 1
        a1{a1}
    end

%% Output Layer
    subgraph Output Layer
        y(((hθx)))
    end

%% Connections: Input → Hidden 1
    x0 --> a1
    x1 --> a1

%% Connections: Hidden 1 → Output
    a1 --> y
```

The logical NOT operator outputs 1 when $x_1 = 0$, and 0 when $x_1 = 1$:

| $x_1$ | Result |
| --- | --- |
| 0 | 1 |
| 1 | 0 |

We can implement NOT using weights:

$$\Theta^{(1)} = \begin{bmatrix} 10 & -20 \end{bmatrix}$$

The hypothesis becomes:

$$h_\Theta(x) = g(10 - 20x_1)$$

| $x_1$ | Expected | $h_\Theta(x)$ |
| --- | --- | --- |
| 0 | 1 | $g(10) \approx 1$ |
| 1 | 0 | $g(-10) \approx 0$ |

Summary

By choosing appropriate weights, we can simulate logic gates with a single logistic neuron:

AND

$$\Theta^{(1)} = \begin{bmatrix} -30 & 20 & 20 \end{bmatrix}$$

OR

$$\Theta^{(1)} = \begin{bmatrix} -10 & 20 & 20 \end{bmatrix}$$

NOT

$$\Theta^{(1)} = \begin{bmatrix} 10 & -20 \end{bmatrix}$$

NOR = NOT OR

$$\Theta^{(1)} = \begin{bmatrix} 10 & -20 & -20 \end{bmatrix}$$
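As a sanity check, all four weight vectors can be run through one generic neuron. Here is a small Python sketch (the `neuron` helper and the gate names are my own labels, assuming the sigmoid $g$ used throughout the post):

```python
import math

def g(z):
    # Logistic (sigmoid) activation
    return 1.0 / (1.0 + math.exp(-z))

def neuron(theta, xs):
    # theta[0] is the bias weight (it multiplies the bias unit x0 = 1)
    z = theta[0] + sum(w * x for w, x in zip(theta[1:], xs))
    return round(g(z))  # outputs are very close to 0 or 1, so rounding is safe

AND = [-30, 20, 20]
OR  = [-10, 20, 20]
NOT = [10, -20]
NOR = [10, -20, -20]

for x in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(x, "AND:", neuron(AND, x), "OR:", neuron(OR, x), "NOR:", neuron(NOR, x))
print("NOT:", neuron(NOT, (0,)), neuron(NOT, (1,)))
```

Note that NOR is just OR with its non-bias weights negated and the bias sign flipped, which is exactly the "NOT applied to OR" construction above.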