Both regularization methods, ridge regression and the LASSO, are implemented in the `glmnet` R package. The workhorse of the `glmnet` package is the eponymous `glmnet()` function. Type `?glmnet` to review the help page for that function. The `glmnet()` function fits a generalized linear model via penalized maximum likelihood. The `alpha` argument is the so-called *mixing parameter*, with \(0 \le \alpha \le 1\). If \(\alpha = 1\) the penalty term corresponds to LASSO regularized regression, and if \(\alpha = 0\) the penalty term corresponds to ridge regularized regression. Values of \(\alpha\) between \(0\) and \(1\) correspond to **elastic net regularization**, which is discussed elsewhere.
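For reference, in the Gaussian case the objective minimized by `glmnet()` can be written as

\[
\min_{\beta_0,\, \beta} \; \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - \beta_0 - x_i^{T} \beta \right)^2 + \lambda \left[ \frac{1-\alpha}{2} \lVert \beta \rVert_2^2 + \alpha \lVert \beta \rVert_1 \right],
\]

so that \(\alpha = 1\) retains only the \(\ell_1\) (LASSO) penalty and \(\alpha = 0\) retains only the \(\ell_2\) (ridge) penalty.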

It is important to realize that the degree of regularization depends on the regularization parameter \(\lambda\). Thus, it is useful to evaluate the regression function for a sequence of \(\lambda\) values. By default, `glmnet()` evaluates a sequence of 100 \(\lambda\) values (note that the function may stop earlier if the fit no longer changes). The number of \(\lambda\) values may be set by the `nlambda` argument, and the sequence of \(\lambda\) values may be specified by the `lambda` argument in the `glmnet()` function call.
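The effect of the `lambda` argument can be sketched on simulated data (the variables `X`, `y`, and `lambda.seq` below are for illustration only and are not part of the weather data set used in this section):

```r
# a minimal sketch on simulated data (illustration only;
# assumes the glmnet package is installed)
library(glmnet)

set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- X %*% c(2, -1, 0, 0, 1) + rnorm(n)

# a custom, decreasing lambda sequence on a log scale
lambda.seq <- 10^seq(2, -2, length.out = 50)
m <- glmnet(X, y, alpha = 0, lambda = lambda.seq)

# one set of (p + 1) coefficients per lambda value
dim(coef(m))  # 6 x 50
```

When supplying `lambda` manually, the package documentation recommends a decreasing sequence that covers the whole regularization path rather than a single value.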

The `glmnet()` function does not work with the formula notation, but uses matrices instead. At the beginning of the section we assigned the response vectors and the model matrices of the training set and the test set to the variables `y.train`, `y.test`, `X.train`, and `X.test`, respectively.

```
# load processed data set from previous section
load(url("https://userpage.fu-berlin.de/soga/300/30100_data_sets/dwd_30200.RData"))
# load helper functions from previous section
load(url("https://userpage.fu-berlin.de/soga/300/30100_data_sets/helper_functions_30200.RData"))
# load list object from previous section
load(url("https://userpage.fu-berlin.de/soga/300/30100_data_sets/model_outcome_II_30200.RData"))
```

We start with the ridge regression (`alpha = 0`) and plot the results.

```
library(glmnet)
### RIDGE REGRESSION ###
m.ridge <- glmnet(X.train, y.train, alpha = 0)
# plot model coefficients vs. shrinkage parameter lambda
plot(m.ridge, xvar = "lambda", label = TRUE)
```
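Each curve in the plot traces one coefficient as \(\lambda\) varies. To read off the coefficients at one particular \(\lambda\), the `coef()` method can be applied to the fitted object (the value `s = 0.1` below is an arbitrary choice for illustration):

```r
# extract the coefficients at a single value of lambda (argument s);
# s = 0.1 is an arbitrary choice for illustration
coef(m.ridge, s = 0.1)
```

If the requested value is not part of the fitted \(\lambda\) sequence, `coef()` interpolates between the neighbouring fits (see `?coef.glmnet`).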