- Our new problem is to minimize the cost function given this added constraint.
- We don’t want the model to memorize the training dataset; we want a model that generalizes well to new, unseen data.
- In more specific terms, we can think of regularization as adding (or increasing) bias when our model suffers from high variance (i.e., it overfits the training data).
- A discussion on regularization in logistic regression, and how its usage plays into better model fit and generalization.
- If we regularize the cost function (e.g., via L2 regularization), we add an additional term to our cost function (J) that increases as the values of the parameter weights (w) increase; keep in mind that with regularization we add a new hyperparameter, lambda, to control the regularization strength.
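The L2-regularized cost described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the article; the function and variable names (`l2_regularized_cost`, `lam`, etc.) are my own, and it assumes binary labels and a cross-entropy cost, following the usual convention of leaving the bias unregularized.

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping real values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def l2_regularized_cost(w, b, X, y, lam):
    """Cross-entropy cost J plus an L2 penalty on the weights w.

    lam is the regularization-strength hyperparameter (lambda);
    the bias b is conventionally not penalized.
    """
    m = X.shape[0]
    p = sigmoid(X @ w + b)
    cross_entropy = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Penalty grows with the squared magnitude of the weights,
    # so larger weights make J larger, as described above
    penalty = (lam / (2 * m)) * np.sum(w ** 2)
    return cross_entropy + penalty
```

Setting `lam = 0` recovers the unregularized cost, while increasing `lam` penalizes large weights more heavily, which is how the lambda hyperparameter controls regularization strength.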
To read the full article, click here.
@kdnuggets: “Regularization in Logistic Regression: Better Fit & Generalization? #MachineLearning @rasbt”
Regularization in Logistic Regression: Better Fit and Better Generalization?