As you know, the lasso is a popular variable-selection method that minimizes an objective of the form

$$
(y - X\beta)'(y - X\beta) + \lambda \sum_i |\beta_i|
$$

My first question: is it possible to use the `optim()` function in R to minimize this objective?

A code sample might look like:

```r
x <- matrix(rnorm(100), ncol = 20)
y <- rowSums(x)

# objective: RSS + l * L1 penalty; beta comes first, as optim() expects
f <- function(beta, x, y, l) {
  beta <- as.matrix(beta)
  sum((y - x %*% beta)^2) + l * sum(abs(beta))
}

optim(rep(0, ncol(x)), f, method = "CG", x = x, y = y, l = 1)
```

My other questions are: 2) is the code above correct? 3) how can I force some of the coefficients to be exactly zero?

PLEASE NOTE THAT I DON'T WANT TO USE PACKAGES LIKE LARS, GLMNET, or … — just the `optim` or `nlm` functions.

Thanks


#### Best Answer

With standard algorithms for smooth convex optimization, such as CG or gradient descent, you tend to get results similar to the lasso, but the coefficients never become exactly zero. The objective isn't differentiable at zero, so unless an iterate lands on zero exactly, you will typically end up with all coefficients non-zero (some very small, depending on the step size). That's why specialized lasso algorithms such as LARS and coordinate descent are useful.
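For illustration, one such specialized approach can be written in a few lines of base R: proximal gradient descent (ISTA), where a soft-thresholding step produces exact zeros. This is a minimal sketch, not part of the original answer; `lasso_ista` and `soft_threshold` are hypothetical helper names, and the step size and iteration count are untuned assumptions.

```r
# Soft-thresholding: the proximal operator of t * |.|_1
soft_threshold <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

# ISTA for the lasso objective (y - X beta)'(y - X beta) + lambda * |beta|_1
lasso_ista <- function(x, y, lambda, iters = 2000) {
  beta <- rep(0, ncol(x))
  # Lipschitz constant of the RSS gradient is 2 * lambda_max(X'X)
  L <- 2 * max(eigen(crossprod(x), only.values = TRUE)$values)
  step <- 1 / L
  for (k in seq_len(iters)) {
    grad <- -2 * as.vector(crossprod(x, y - x %*% beta))  # gradient of RSS
    beta <- soft_threshold(beta - step * grad, step * lambda)
  }
  beta
}

set.seed(1)
x <- matrix(rnorm(50 * 10), ncol = 10)
y <- x[, 1] - x[, 2] + rnorm(50, sd = 0.1)
b <- lasso_ista(x, y, lambda = 5)
# With a large enough lambda, several entries of b are exactly zero
```

The soft-thresholding step is what smooth methods like CG lack: it maps any coordinate within `step * lambda` of zero to exactly zero, instead of merely making it small.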

But if you insist on using these algorithms, you can truncate the result: once you have the "optimal" solution, set every beta with absolute value below some tolerance, say 1e-9, to zero.
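A sketch of that truncation, reusing the `optim()` setup from the question (the 1e-9 tolerance is arbitrary, as noted above):

```r
set.seed(1)
x <- matrix(rnorm(100), ncol = 20)
y <- rowSums(x)
f <- function(beta, x, y, l) sum((y - x %*% beta)^2) + l * sum(abs(beta))

fit <- optim(rep(0, ncol(x)), f, method = "CG", x = x, y = y, l = 1)
beta <- fit$par
beta[abs(beta) < 1e-9] <- 0   # force near-zero coefficients to exactly zero
```

Note that this is cosmetic post-processing: it does not change which variables the optimizer actually shrank, so the choice of tolerance determines what gets "selected".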

### Similar Posts:

- Solved – Relationship between LASSO and MAP
- Solved – How is the lasso orthogonal design case solution derived
- Solved – KKT versus unconstrained formulation of lasso regression