

Examples and Intuitions II — Building XNOR with a Hidden Layer

In the previous section, we saw how to implement basic logical gates (AND, OR, NOR) using single neurons. However, some functions like XOR and XNOR cannot be represented by a single neuron. In this post, we will see how adding a hidden layer allows us to model the XNOR function.

Written by Hitesh Sahu, a passionate developer and blogger.

Fri Feb 27 2026



Logical Gates as Neural Networks

In the previous section, we constructed single-neuron networks for:

AND

$$\Theta^{(1)} = \begin{bmatrix} -30 & 20 & 20 \end{bmatrix}$$

NOR

$$\Theta^{(1)} = \begin{bmatrix} 10 & -20 & -20 \end{bmatrix}$$

OR

$$\Theta^{(1)} = \begin{bmatrix} -10 & 20 & 20 \end{bmatrix}$$
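These three gates are easy to verify in a few lines of Python. Here is a minimal sketch (the helper names `g` and `neuron` are ours, not from the course material), assuming the standard sigmoid activation g(z) = 1 / (1 + e^(-z)):

```python
import math

def g(z):
    """Sigmoid activation."""
    return 1 / (1 + math.exp(-z))

def neuron(theta, x1, x2):
    """Single logistic neuron with bias input x0 = 1."""
    return g(theta[0] + theta[1] * x1 + theta[2] * x2)

AND = [-30, 20, 20]
NOR = [10, -20, -20]
OR  = [-10, 20, 20]

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              round(neuron(AND, x1, x2)),
              round(neuron(NOR, x1, x2)),
              round(neuron(OR, x1, x2)))
```

Rounding the sigmoid output recovers each exact truth table, since the large weights push g very close to 0 or 1.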

Constructing XNOR

The logical XNOR operator outputs 1 when:

  • x1=0x_1 = 0x1​=0 and x2=0x_2 = 0x2​=0, or
  • x1=1x_1 = 1x1​=1 and x2=1x_2 = 1x2​=1

In other words, when both inputs are the same.

A single neuron can only draw one linear decision boundary, and XNOR is not linearly separable.
We need a hidden layer.


Network Architecture

$$\begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} \rightarrow \begin{bmatrix} a^{(2)}_1 \\ a^{(2)}_2 \end{bmatrix} \rightarrow a^{(3)} \rightarrow h_\Theta(x)$$

Where:

  • Hidden unit 1 implements AND
  • Hidden unit 2 implements NOR
  • Output layer implements OR

Step 1 — First Layer (AND + NOR)

We combine AND and NOR into one matrix:

$$\Theta^{(1)} = \begin{bmatrix} -30 & 20 & 20 \\ 10 & -20 & -20 \end{bmatrix}$$

This gives:

$$a^{(2)} = g(\Theta^{(1)} x)$$

So:

  • $a^{(2)}_1$ behaves like AND
  • $a^{(2)}_2$ behaves like NOR
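In matrix form, a single NumPy call evaluates both hidden units at once. A small sketch (the variable names are ours), again assuming the sigmoid activation:

```python
import numpy as np

def g(z):
    """Sigmoid activation, applied element-wise."""
    return 1 / (1 + np.exp(-z))

# Row 1 holds the AND weights, row 2 the NOR weights.
Theta1 = np.array([[-30, 20, 20],
                   [ 10, -20, -20]])

x = np.array([1, 0, 0])   # bias x0 = 1, then x1 = 0, x2 = 0
a2 = g(Theta1 @ x)
print(np.round(a2))       # AND(0,0) = 0, NOR(0,0) = 1
```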

Step 2 — Second Layer (OR)

Now we combine the hidden outputs using OR:

$$\Theta^{(2)} = \begin{bmatrix} -10 & 20 & 20 \end{bmatrix}$$

This gives (where $a^{(2)}$ includes the bias unit $a^{(2)}_0 = 1$, matching the three entries of $\Theta^{(2)}$):

$$a^{(3)} = g(\Theta^{(2)} a^{(2)})$$

Final hypothesis:

$$h_\Theta(x) = a^{(3)}$$

Full Computation

The forward propagation is:

$$a^{(2)} = g(\Theta^{(1)} x)$$
$$a^{(3)} = g(\Theta^{(2)} a^{(2)})$$
$$h_\Theta(x) = a^{(3)}$$
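Putting the two layers together, the whole forward pass fits in a short function. A sketch under the same assumptions as above (sigmoid activation, bias units prepended explicitly; the function name `h` is ours):

```python
import numpy as np

def g(z):
    """Sigmoid activation, applied element-wise."""
    return 1 / (1 + np.exp(-z))

Theta1 = np.array([[-30, 20, 20],     # hidden unit 1: AND
                   [ 10, -20, -20]])  # hidden unit 2: NOR
Theta2 = np.array([-10, 20, 20])      # output unit: OR

def h(x1, x2):
    """Forward propagation for the two-layer XNOR network."""
    x = np.array([1, x1, x2])          # prepend bias unit x0 = 1
    a2 = g(Theta1 @ x)                 # hidden activations
    a2 = np.concatenate(([1], a2))     # prepend bias unit a0^(2) = 1
    return g(Theta2 @ a2)              # output activation a^(3)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(h(x1, x2)))
```

Rounding the output recovers the XNOR truth table exactly: 1 when the inputs agree, 0 when they differ.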

Why This Works

The hidden layer computes:

  • AND(x₁, x₂)
  • NOR(x₁, x₂)

The output layer computes:

$$\text{OR}(\text{AND}, \text{NOR})$$

Which is exactly:

$$\text{XNOR}(x_1, x_2)$$

Key Insight

  • Single logistic neuron → can model AND, OR, NOR
  • Cannot model XOR or XNOR
  • Adding one hidden layer enables nonlinear decision boundaries

This is the first concrete example of why hidden layers matter.
