
Regularized Logistic Regression

Regularization helps prevent overfitting by penalizing large weights. Compared to the non-regularized model, the regularized version produces smoother decision boundaries.

Hitesh Sahu
Written by Hitesh Sahu, a passionate developer and blogger.

Fri Feb 27 2026


⚖️ Regularized Logistic Regression


Cost Function (Without Regularization)

Recall the logistic regression cost function:

$$J(\theta) = - \frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(h_\theta(x^{(i)})) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right]$$
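As a concrete check, this cost can be computed directly in NumPy. A minimal sketch; the helper names `sigmoid` and `cost`, and the assumption that `X` carries a leading column of ones for the bias, are mine, not from the post:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # Cross-entropy cost J(theta) with no regularization.
    # X: (m, n+1) design matrix with a leading column of ones, y: (m,) labels in {0, 1}
    m = len(y)
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
```

With `theta` all zeros, every prediction is 0.5, so the cost is exactly $\log 2 \approx 0.693$ regardless of the labels.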

Cost Function With Regularization

We add an L2 penalty term:

$$J(\theta) = - \frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(h_\theta(x^{(i)})) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$$

Regularization Term

The regularization term is:

$$\sum_{j=1}^{n} \theta_j^2$$

The penalized parameter vector contains $\theta_1, \dots, \theta_n$; it explicitly excludes the bias term $\theta_0$.

  • The sum runs from $j = 1$ to $n$
  • So $\theta_0$ is not penalized

Why Exclude $\theta_0$?

The bias term controls the decision boundary shift.

We do not want to shrink it toward zero.

Only the other parameters are regularized.
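Putting the pieces together, the regularized cost can be sketched in NumPy. The function name `cost_regularized` and the convention that `theta[0]` holds the bias are my assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_regularized(theta, X, y, lam):
    # Cross-entropy cost plus the L2 penalty (lambda / 2m) * sum_{j>=1} theta_j^2.
    # theta[0] is the bias theta_0 and is excluded from the penalty via theta[1:].
    m = len(y)
    h = sigmoid(X @ theta)
    cross_entropy = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    penalty = lam / (2 * m) * np.sum(theta[1:] ** 2)
    return cross_entropy + penalty
```

Because of the `theta[1:]` slice, changing $\lambda$ has no effect on the cost when only the bias is nonzero, which is exactly the "do not shrink the bias" rule above.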


Gradient Descent With Regularization

repeat until convergence: {

For $j = 0$ (bias), there is no regularization term for $\theta_0$:

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)}$$

For $j = 1, 2, \dots, n$:

$$\theta_j := \theta_j - \alpha \left[ \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m}\theta_j \right]$$

}

where:

$$h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$$

The update has the same form as regularized linear regression; the difference is that $h_\theta(x)$ is now the sigmoid and the cost being minimized is the logistic (cross-entropy) cost.
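The full loop above can be sketched in NumPy. The vectorized form updates all parameters at once while leaving `theta[0]` out of the penalty; the function name, hyperparameter defaults, and the leading ones column in `X` are my assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, lam=1.0, alpha=0.1, iters=1000):
    # Regularized gradient descent for logistic regression.
    # X: (m, n+1) with a leading column of ones; theta[0] (the bias) is not regularized.
    m, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)
    for _ in range(iters):
        error = sigmoid(X @ theta) - y        # (m,) prediction errors
        grad = X.T @ error / m                # unregularized gradient, all j
        grad[1:] += (lam / m) * theta[1:]     # add (lambda/m) * theta_j for j >= 1 only
        theta -= alpha * grad
    return theta
```

On a tiny 1-D toy set with labels 0, 0, 1, 1 at $x = 0, 1, 2, 3$, the learned boundary lands near $x = 1.5$ and all four points are classified correctly.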

Simplified Update Rule

You can also rewrite it as:

For $j \ge 1$:

$$\theta_j := \theta_j \left(1 - \alpha \frac{\lambda}{m}\right) - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
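A quick numeric sanity check confirms the two forms of the update are algebraically identical; the values below are illustrative, not from the post:

```python
# Check that the bracketed update and the "weight decay" rewrite agree for j >= 1.
theta_j, grad_j = 0.7, 0.3   # illustrative parameter value and data-gradient term
alpha, lam, m = 0.1, 1.0, 50

# theta_j - alpha * [grad_j + (lambda/m) * theta_j]
original = theta_j - alpha * (grad_j + (lam / m) * theta_j)
# theta_j * (1 - alpha * lambda/m) - alpha * grad_j
simplified = theta_j * (1 - alpha * lam / m) - alpha * grad_j

assert abs(original - simplified) < 1e-12
```

The factor $(1 - \alpha \lambda / m)$ is slightly less than 1, so each step shrinks $\theta_j$ a little before applying the ordinary gradient step; this is why L2 regularization is often called weight decay.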

Intuition

Regularization:

  • Penalizes large weights
  • Reduces model complexity
  • Helps prevent overfitting
  • Encourages smoother decision boundaries

The regularized model is less likely to overfit compared to the non-regularized one.

Summary

Regularized logistic regression modifies:

  1. The cost function
  2. The gradient updates

Key rule:

  • Do not regularize $\theta_0$
  • Regularize all other parameters