Chapter 15 Regularization and Feature Selection
We assume you have loaded the following packages:
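Something like the following suffices (numpy is assumed here alongside pandas and matplotlib, as we use it for the data and the seed below):

```python
import numpy as np                 # assumed: used for data and seeding below
import pandas as pd
import matplotlib.pyplot as plt
```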
Below we load more packages as we introduce them.
For replicability, we also set the seed:
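Any fixed value will do; here we pick one arbitrarily:

```python
np.random.seed(1)                  # the seed value itself is an arbitrary choice
```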
15.2 Forward selection
There are several solutions to this problem. A popular algorithm is forward selection: one first picks the best one-feature model, then tries adding each of the remaining features one-by-one to find the best two-feature model, then the best three-feature model, and so on, until model performance starts to deteriorate.
Let us play through this algorithm with the example data we created:
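The simulation code is not repeated here, so below we use a stand-in dataset with the same structure: three features, of which \(\boldsymbol{x}_2\) carries most of the signal, and a response \(y\). We then fit all three one-feature models:

```python
from sklearn.linear_model import LinearRegression

# Stand-in for the simulated data: x2 drives y, x3 adds a weaker signal,
# and x1 is a noisy copy of x2 (the chapter's actual simulation may differ)
n = 20
x2 = np.random.normal(size=n)
x1 = x2 + np.random.normal(scale=0.5, size=n)
x3 = np.random.normal(size=n)
y = x2 + 0.5*x3 + np.random.normal(size=n)

m = LinearRegression()
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    X = np.column_stack([x])       # fit requires a matrix, not a vector
    m.fit(X, y)
    print(name, "training R2 =", m.score(X, y))
```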
We use column_stack to create matrices out of individual vectors (remember: the fit method requires X to be a matrix, not a vector!) and thereafter fit the model and output its \(R^2\) on the training data. Remember: the score method computes \(R^2\) in the case of linear regression. Also be aware that models with the same number of features can be compared with just training \(R^2\); cross-validation is not needed.
Out of these three features, \(\boldsymbol{x}_2\) gives the best \(R^2\).
Next, we’ll add the second feature. As \(\boldsymbol{x}_2\) is now taken, we only have to test \(\boldsymbol{x}_1\) and \(\boldsymbol{x}_3\) and see if either of these improves our model:
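Continuing the sketch, we keep \(\boldsymbol{x}_2\) and add each remaining feature in turn:

```python
for name, x in [("x2 + x1", x1), ("x2 + x3", x3)]:
    X = np.column_stack([x2, x])
    m.fit(X, y)
    print(name, "training R2 =", m.score(X, y))
```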
The feature combination \(\boldsymbol{x}_2\) and \(\boldsymbol{x}_3\) gave us a slightly better result.
But is the best two-feature model better than the best one-feature model? As these models contain different numbers of features, training \(R^2\) alone cannot tell us (it never decreases when features are added), so we compute cross-validated \(R^2\) instead. We test our best single-feature model and the best two-feature model using cross-validation:
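A sketch of the comparison (five folds is an assumption; for regressors, cross_val_score defaults to \(R^2\) scoring):

```python
from sklearn.model_selection import cross_val_score

# Mean cross-validated R2 of the best one- and two-feature models
print(cross_val_score(m, np.column_stack([x2]), y, cv=5).mean())
print(cross_val_score(m, np.column_stack([x2, x3]), y, cv=5).mean())
```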
Not surprisingly, both models overfit, but the two-feature model is noticeably better than the single-feature model.
Finally, we can also evaluate the 3-feature model. As we only have 3 features, there is only a single model:
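With the same stand-in data and folds:

```python
print(cross_val_score(m, np.column_stack([x1, x2, x3]), y, cv=5).mean())
```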
Hence the three-feature model turned out worse than the two-feature model. We conclude that, based on forward selection, the best model is \[\begin{equation*} y_{i} = \beta_{0} + \beta_{2} x_{2i} + \beta_{3} x_{3i} + e_{i}. \end{equation*}\]
15.3 Ridge and Lasso regression
Ridge and lasso regression attack the same problem as forward selection. These methods penalize large \(\beta\) values and hence shrink (ridge) or eliminate (lasso) the coefficients of correlated variables. Unlike forward selection, they do not need looping over different combinations of variables; however, one normally has to loop over the penalty parameter alpha to find its optimal value.
Both methods live in sklearn.linear_model.
We demonstrate this with ridge regression below:
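A sketch along these lines, using all three features and the two penalty values discussed below:

```python
from sklearn.linear_model import Ridge

X = np.column_stack([x1, x2, x3])
for alpha in [1, 0.3]:             # the two penalties compared below
    r = Ridge(alpha=alpha)
    print("alpha =", alpha, "CV R2 =", cross_val_score(r, X, y, cv=5).mean())
```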
As is evident from the above, penalty values 1 and 0.3 give virtually equal results. The cross-validated \(R^2\) is also comparable to what forward selection suggested.
Lasso works in an analogous fashion to ridge, but as its penalty (the \(L_1\) penalty) is not differentiable everywhere, you may run into more problems with convergence.
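A minimal lasso sketch on the same data (the penalty value and max_iter are arbitrary choices):

```python
from sklearn.linear_model import Lasso

lasso = Lasso(alpha=0.1, max_iter=10_000)  # raise max_iter if convergence fails
lasso.fit(X, y)
print(lasso.coef_)                 # lasso can set coefficients exactly to zero
```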
Both methods have more options; in particular, the features can be normalized before fitting.
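One way to do this in current scikit-learn is to standardize the features in a pipeline; a minimal sketch:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize features, then fit ridge; the pipeline is cross-validated as one unit
pipe = make_pipeline(StandardScaler(), Ridge(alpha=1))
print(cross_val_score(pipe, X, y, cv=5).mean())
```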