
Laila Yasmin

Time 1:20-1:40PM, Monday November 27th
Room CLE A311

Title

The Lasso, Standard Errors and the Bayesian Lassos via Hierarchical Models

Abstract

Within the context of multiple linear regression, an important problem is selecting the important variables in the model. One approach is the Lasso, which provides a sparse estimator of the regression coefficients and thus allows simultaneous point estimation and model selection. The Lasso minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant. When the linear regression parameters have independent Laplace (double-exponential) priors, the Lasso can be interpreted as a posterior mode associated with a Bayesian model. We use the hierarchical Bayesian formulation of the Lasso, implemented via Markov chain Monte Carlo, to produce valid standard errors, whereas obtaining standard errors for the usual frequentist Lasso can be problematic. The Bayesian elastic net uses both $L_1$ and $L_2$ penalties simultaneously and avoids the "double shrinkage problem" of the parameters. Both the Lasso and the elastic net are useful when the sample size is smaller than the number of predictors. We compare the Lasso, the Bayesian Lasso, and the Bayesian elastic net using simulation and real data analysis.
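The frequentist versions of the two penalized estimators mentioned in the abstract can be illustrated in a few lines. The sketch below (a minimal example, not the speaker's actual analysis; the data, penalty strength `alpha`, and mixing parameter `l1_ratio` are arbitrary choices for illustration) fits a Lasso and an elastic net to simulated data with more predictors than observations, showing the sparsity induced by the $L_1$ penalty:

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n, p = 50, 100  # sample size smaller than the number of predictors
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0  # only the first five predictors are truly nonzero
y = X @ beta + rng.standard_normal(n)

# Lasso: minimizes RSS with an L1 penalty (equivalently, subject to
# a bound on the sum of absolute coefficient values)
lasso = Lasso(alpha=0.5).fit(X, y)

# Elastic net: combines L1 and L2 penalties simultaneously
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)

print("Lasso nonzero coefficients:", int(np.sum(lasso.coef_ != 0)))
print("Elastic net nonzero coefficients:", int(np.sum(enet.coef_ != 0)))
```

Note that these fits return point estimates only; the standard-error problem the abstract raises is exactly that such sparse estimators do not come with reliable frequentist uncertainty measures, which motivates the hierarchical Bayesian formulation sampled by MCMC.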