# Solved – Putting limits on estimated coefficient values

I recently came across an example of using EViews to put non-linear/convex limits
on estimated coefficients in OLS.

I managed to track down a forum thread describing how this is done (here)…

But I don't actually know anything more about how these models are estimated in practice. Does anyone have a link to a paper/book/source code/explanation of how this is implemented in practice?


# Question update:

Using cardinal's answer to my original question, I have been able to solve this problem using `optim()` – just replacing $\beta$ by $1-\exp(\beta)$ in the minus log-likelihood and minimizing iteratively. Now I would like to solve it using `nls()`. So, for example, in the following case:

```r
library(MASS)
# coefficients bounded above by 1
n <- 100
p <- 2
x <- mvrnorm(n, rep(0, p), diag(p))
t1 <- 4
t2 <- 1.5
t3 <- -2
y <- cbind(1, x) %*% c(t1, t2, t3)
y <- y + rnorm(n, 0, 2)
```

I get $1-\exp(\hat{t}_2)=1$ and $1-\exp(\hat{t}_3)=-2$. So my question now is: can this be done with `nls()`?
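For reference, the `optim()` route described above can be sketched like this (a minimal sketch only; the seed, starting values, and tolerance assumptions are mine, not from the original):

```r
library(MASS)
set.seed(1)
n <- 100
p <- 2
x <- mvrnorm(n, rep(0, p), diag(p))
y <- cbind(1, x) %*% c(4, 1.5, -2) + rnorm(n, 0, 2)

# residual sum of squares with the slopes reparametrised as 1 - exp(b),
# which keeps them strictly below 1; the intercept stays unconstrained
rss <- function(par) {
  coefs <- c(par[1], 1 - exp(par[2:3]))
  sum((y - cbind(1, x) %*% coefs)^2)
}

fit <- optim(c(0, 0, 0), rss)
c(fit$par[1], 1 - exp(fit$par[2:3]))
# intercept near 4; the second slope is pushed toward the bound 1
# (the true value 1.5 violates the constraint), the third is near -2
```

Since the true second slope lies outside the constrained region, its estimate piles up at the boundary, which is exactly the behaviour described above.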

This simply turns the OLS problem into an NLS one. Say we have the model

$$y_i=\alpha_0+\alpha_1 x_i+\varepsilon_i$$

Say we want to force $\alpha_1$ to be positive; then rewrite this equation as

$$y_i=\alpha_0+\exp(\beta_1)x_i+\varepsilon_i$$

Algebraically this is the same model; it is only a reparametrisation. Instead of $(\alpha_0,\alpha_1)\in\mathbb{R}\times\mathbb{R}_+$ we now have $(\alpha_0,\beta_1)\in\mathbb{R}^2$, and we can estimate the second equation using non-linear least squares.

Here is how we can achieve this in R:

```r
> x <- rnorm(100)
> y <- 1 + 2*x + rnorm(100)/5
> lm(y ~ x)

Call:
lm(formula = y ~ x)

Coefficients:
(Intercept)            x
     0.9794       1.9895

> nls(y ~ a + exp(b)*x, start = list(a = 0.9, b = log(2) - 0.3))
Nonlinear regression model
  model: y ~ a + exp(b) * x
   data: parent.frame()
     a      b
0.9794 0.6879
 residual sum-of-squares: 3.694

Number of iterations to convergence: 3
Achieved convergence tolerance: 6.09e-06
> exp(0.6879)
[1] 1.989533
```

Of course, since you convert the OLS problem to an NLS one, you need to supply starting values. In EViews this is done automatically. Note, however, that EViews may happily report an answer even when it is not correct, i.e. when convergence was not achieved. This at least was my personal experience working with it.
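One convenient way to obtain starting values (a sketch of my own, not part of the original recipe) is to fit the unconstrained OLS model first and push its coefficients through the inverse reparametrisation, here $\beta_1=\log(\hat\alpha_1)$, with a fallback when the unconstrained estimate already violates the constraint:

```r
set.seed(2)
x <- rnorm(100)
y <- 1 + 2*x + rnorm(100)/5

ols <- lm(y ~ x)
a1 <- coef(ols)[[2]]
# inverse of alpha1 = exp(beta1); fall back to a small positive value
# if the unconstrained slope estimate is not positive
b1_start <- if (a1 > 0) log(a1) else log(1e-3)

fit <- nls(y ~ a + exp(b)*x, start = list(a = coef(ols)[[1]], b = b1_start))
exp(coef(fit)[["b"]])  # close to the true slope 2
```

Starting at the OLS solution usually means `nls` converges in very few iterations, since the constrained optimum is nearby whenever the constraint is not binding.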

Update: the last example in the link is not correct. Suppose you have the constraints $\alpha_0+\alpha_1=1$ and $\alpha_0,\alpha_1>0$; then the problem actually has only one free parameter:

$$y=\alpha+(1-\alpha)x+\varepsilon$$

Then you can use the reparametrisation

$$\alpha=\frac{\exp(\beta)}{1+\exp(\beta)}$$

Here is the implementation in R:

```r
> y <- 0.3 + 0.7*x + rnorm(100)/5
> lm(y ~ x)

Call:
lm(formula = y ~ x)

Coefficients:
(Intercept)            x
     0.3175       0.7402

> nls(y ~ exp(a)/(exp(a)+1) + (1/(exp(a)+1))*x, start = list(a = -0.5))
Nonlinear regression model
  model: y ~ exp(a)/(exp(a) + 1) + (1/(exp(a) + 1)) * x
   data: parent.frame()
      a
-0.8876
 residual sum-of-squares: 4.073

Number of iterations to convergence: 3
Achieved convergence tolerance: 4.069e-09
> exp(-0.8876)
[1] 0.4116425

Note, however, that `exp(-0.8876)` is not the estimate of $\alpha$ itself; mapping back through the reparametrisation gives $\hat\alpha=\exp(-0.8876)/(1+\exp(-0.8876))\approx 0.29$, close to the generated value 0.3. Also, if you try to use the reparametrisation suggested in the link:

$$\alpha_0=\frac{\exp(\beta_0)}{\exp(\beta_0)+\exp(\beta_1)},\qquad \alpha_1=\frac{\exp(\beta_1)}{\exp(\beta_0)+\exp(\beta_1)}$$

you will have an ill-posed NLS problem and `nls` will complain about singular gradients: $\beta_0$ and $\beta_1$ enter the model only through their difference, so they are identified only up to a common additive shift.
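A minimal sketch reproducing that failure (the seed and data-generating values are my own, chosen to match the example above):

```r
set.seed(3)
x <- rnorm(100)
y <- 0.3 + 0.7*x + rnorm(100)/5

# two-parameter version from the link: b0 and b1 enter only through
# their difference, so the gradient columns are exactly collinear
res <- try(
  nls(y ~ exp(b0)/(exp(b0) + exp(b1)) + (exp(b1)/(exp(b0) + exp(b1)))*x,
      start = list(b0 = 0, b1 = 0)),
  silent = TRUE
)
inherits(res, "try-error")
# TRUE: nls stops with "singular gradient matrix at initial parameter estimates"
```

The derivative of the mean function with respect to `b0` is the exact negative of the derivative with respect to `b1`, so the Jacobian is rank-deficient at every point, not just at the starting values.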
