⚖️ Regularized Logistic Regression
Regularization helps prevent overfitting by penalizing large weights.
Compared to the non-regularized model, the regularized version produces smoother decision boundaries.
Cost Function (Without Regularization)
Recall the logistic regression cost function:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$
Cost Function With Regularization
We add an L2 penalty term:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$$
Regularization term
The regularization term is:

$$\frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$$
The parameter vector contains:

$$\theta = \left[\theta_0, \theta_1, \ldots, \theta_n\right]$$

The penalty explicitly excludes the bias term $\theta_0$:

- The regularization sum runs from $j = 1$ to $j = n$
- So $\theta_0$ is not penalized
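The regularized cost above can be sketched in NumPy. The names `sigmoid`, `regularized_cost`, `X` (a data matrix assumed to include a leading column of ones for the bias), and `lam` (for $\lambda$) are illustrative choices, not part of the notes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_cost(theta, X, y, lam):
    """Regularized logistic regression cost (a sketch).

    X is assumed to already include a column of ones, so theta[0]
    is the bias term and is excluded from the penalty.
    """
    m = len(y)
    h = sigmoid(X @ theta)  # h_theta(x) for every example
    cross_entropy = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    penalty = (lam / (2 * m)) * np.sum(theta[1:] ** 2)  # skip theta[0]
    return cross_entropy + penalty
```

Note the `theta[1:]` slice: it is what makes the sum run from $j = 1$ to $n$, leaving the bias unpenalized.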
Why Exclude $\theta_0$?
The bias term $\theta_0$ controls the shift of the decision boundary.
We do not want to shrink it toward zero.
Only the other parameters $\theta_1, \ldots, \theta_n$ are regularized.
Gradient Descent With Regularization
repeat until convergence: {

For $j = 0$ (the bias), there is no regularization term:

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)}$$

For $j = 1, \ldots, n$:

$$\theta_j := \theta_j - \alpha \left[ \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \right]$$

}
where:

$$h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$$

These updates look identical to those of regularized linear regression, but the hypothesis $h_\theta(x)$ is the sigmoid, so the underlying cost is the logistic one.
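One pass of the update above can be sketched as a single vectorized step. As before, `X` (with a leading column of ones), `gradient_step`, and `lam` are assumed names for this illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha, lam):
    """One gradient descent update for regularized logistic regression.

    X includes a leading column of ones, so theta[0] is the bias: it
    gets no regularization term, while theta[1:] get the extra
    (lam / m) * theta_j term from the penalty.
    """
    m = len(y)
    error = sigmoid(X @ theta) - y      # h_theta(x^(i)) - y^(i)
    grad = (X.T @ error) / m            # unregularized gradient, all j
    grad[1:] += (lam / m) * theta[1:]   # add the penalty term, j >= 1
    return theta - alpha * grad
```

Calling `gradient_step` in a loop implements the "repeat until convergence" block; in practice you would stop when the cost change falls below a tolerance.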
Simplified Update Rule
You can also rewrite the update for $j = 1, \ldots, n$ as:

$$\theta_j := \theta_j \left( 1 - \alpha \frac{\lambda}{m} \right) - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

The factor $\left(1 - \alpha \frac{\lambda}{m}\right)$ is slightly less than 1, so each iteration shrinks $\theta_j$ toward zero before applying the usual gradient step.
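The rewritten form is just an algebraic rearrangement of the original update; a quick numeric check (with made-up example values) confirms the two forms agree:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example values; X has a leading column of ones.
X = np.array([[1.0, 0.5], [1.0, -1.5], [1.0, 2.0]])
y = np.array([1.0, 0.0, 1.0])
theta = np.array([0.1, 0.8])
alpha, lam = 0.1, 1.0
m = len(y)

error = sigmoid(X @ theta) - y
grad_j = (X[:, 1] @ error) / m  # plain gradient for feature j = 1

# Form 1: gradient descent with the (lam / m) * theta_j term added
update_1 = theta[1] - alpha * (grad_j + (lam / m) * theta[1])

# Form 2: shrink theta_j first, then take the usual gradient step
update_2 = theta[1] * (1 - alpha * lam / m) - alpha * grad_j
```

Both expressions distribute to the same value, which is why this form is often described as "weight decay".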
Intuition
Regularization:
- Penalizes large weights
- Reduces model complexity
- Helps prevent overfitting
- Encourages smoother decision boundaries
The regularized model is less likely to overfit compared to the non-regularized one.
Summary
Regularized logistic regression modifies:
- The cost function
- The gradient updates
Key rule:
- Do not regularize the bias term $\theta_0$
- Regularize all other parameters $\theta_1, \ldots, \theta_n$
