Recovering raw coefficients and variances from orthogonal polynomial regression

It seems that if I have a regression model such as $y_i \sim \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3$, I can either fit a raw polynomial and get unreliable results or fit an orthogonal polynomial and get coefficients that don't have a direct physical interpretation (e.g. I cannot use them to find the locations of the extrema on the original scale). It seems like I should be able to have the best of both worlds and transform the fitted orthogonal coefficients and their variances back to the raw scale. I've taken a graduate course in applied linear regression (using Kutner, 5th ed.) and I looked through the polynomial regression chapter in Draper (3rd ed., referred to by Kutner), but found no discussion of how to do this. The help text for the poly() function in R does not discuss it either. Nor have I found anything in my web searching, including here. Is reconstructing raw coefficients (and obtaining their variances) from coefficients fitted to an orthogonal polynomial…

  1. impossible to do and I'm wasting my time.
  2. maybe possible but not known how in the general case.
  3. possible but not discussed because "who would want to?"
  4. possible but not discussed because "it's obvious".

If the answer is 3 or 4, I would be very grateful if someone would have the patience to explain how to do this or point to a source that does so. If it's 1 or 2, I'd still be curious to know what the obstacle is. Thank you very much for reading this, and I apologize in advance if I'm overlooking something obvious.

Yes, it's possible.

Let $z_1, z_2, z_3$ be the non-constant parts of the orthogonal polynomials computed from the $x_i$. (Each is a column vector.) Regressing each of them against the powers of $x_i$ must give a perfect fit. You can perform this regression with the software even when it does not document its procedure for computing orthogonal polynomials. The regression of $z_j$ yields coefficients $\gamma_{j0}, \gamma_{j1}, \gamma_{j2}, \gamma_{j3}$ for which

$$z_{ij} = \gamma_{j0} + x_i\gamma_{j1} + x_i^2\gamma_{j2} + x_i^3\gamma_{j3}.$$

The result is a $4\times 4$ matrix $\Gamma$ that, upon right multiplication, converts the design matrix $X=\pmatrix{1 & x & x^2 & x^3}$ into $$Z=\pmatrix{1 & z_1 & z_2 & z_3} = X\Gamma.\tag{1}$$

After fitting the model

$$\mathbb{E}(Y) = Z\beta$$

and obtaining estimated coefficients $\hat\beta$ (a four-element column vector), you may substitute $(1)$ to obtain

$$\hat Y = Z\hat\beta = (X\Gamma)\hat\beta = X(\Gamma\hat\beta).$$

Therefore $\Gamma\hat\beta$ is the estimated coefficient vector for the model in terms of the original (raw, un-orthogonalized) powers of $x$. The same linear map answers the variance half of the question: because $\Gamma$ is a fixed matrix (it depends only on the $x_i$, not on $Y$), the covariance matrix of the raw-scale coefficients is $\Gamma\operatorname{Var}(\hat\beta)\,\Gamma'$.
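To make the coefficient and variance recovery concrete, here is a minimal self-contained sketch. The synthetic data and the model $y = x + \varepsilon$ are made up for illustration; the check at the end compares against a direct fit on the raw powers, which must agree because the two bases span the same column space:

```r
# A sketch (synthetic data; the model y = x + noise is an assumption
# for illustration) of recovering raw coefficients AND their variances.
set.seed(17)
n <- 50; d <- 3
x <- rnorm(n)
y <- x + rnorm(n)
x.p <- outer(x, 0:d, `^`)        # raw powers: 1, x, x^2, x^3
z <- poly(x, d)                  # orthogonal polynomials
gamma <- coef(lm(cbind(1, z) ~ x.p - 1))   # the matrix Gamma of (1)
fit <- lm(y ~ z)                 # fit on the orthogonal basis
beta.raw <- gamma %*% coef(fit)  # raw-scale coefficients
# Gamma is fixed, so Var(Gamma beta-hat) = Gamma Var(beta-hat) Gamma':
V.raw <- gamma %*% vcov(fit) %*% t(gamma)
se.raw <- sqrt(diag(V.raw))      # raw-scale standard errors
# Check against a direct fit on the raw powers:
fit.p <- lm(y ~ x.p - 1)
stopifnot(isTRUE(all.equal(unname(se.raw),
                           unname(sqrt(diag(vcov(fit.p)))),
                           tolerance = 1e-6)))
```

The standard errors obtained this way are exactly those a raw-polynomial fit would report, but computed from the numerically better-conditioned orthogonal fit.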

The following R code illustrates these procedures and tests them on synthetic data.

```r
n <- 10        # Number of observations
d <- 3         # Degree
#
# Synthesize a regressor, its powers, and orthogonal polynomials thereof.
#
x <- rnorm(n)
x.p <- outer(x, 0:d, `^`); colnames(x.p) <- c("Intercept", paste0("x.", 1:d))
z <- poly(x, d)
#
# Compute the orthogonal polynomials in terms of the powers via OLS.
#
xform <- lm(cbind(1, z) ~ x.p - 1)
gamma <- coef(xform)
#
# Verify the transformation: all components should be tiny, certainly
# infinitesimal compared to 1.
#
if (!isTRUE(all.equal(as.vector(1 + crossprod(x.p %*% gamma - cbind(1, z)) - 1),
                      rep(0, (d+1)^2))))
  warning("Transformation is inaccurate.")
#
# Fit the model with orthogonal polynomials.
#
y <- x + rnorm(n)
fit <- lm(y ~ z)
#summary(fit)
#
# As a check, fit the model with raw powers.
#
fit.p <- lm(y ~ . - 1, data.frame(x.p))
#summary(fit.p)
#
# Compare the results.
#
(rbind(Computed = as.vector(gamma %*% coef(fit)), Fit = coef(fit.p)))

if (!isTRUE(all.equal(as.vector(gamma %*% coef(fit)), as.vector(coef(fit.p)))))
  warning("Results were not the same.")
```
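As a postscript addressing the questioner's motivating example: once raw coefficients are in hand, the extrema on the original scale are the real roots of the fitted cubic's derivative. A small sketch with a made-up cubic (the coefficients here are hypothetical, not taken from the fit above; in practice they would be the entries of `gamma %*% coef(fit)`):

```r
# Hypothetical raw coefficients b0..b3 for b0 + b1*x + b2*x^2 + b3*x^3.
b <- c(1, -3, 0, 1)                      # example: 1 - 3x + x^3
dcoef <- b[-1] * seq_len(length(b) - 1)  # derivative coefficients: b1, 2*b2, 3*b3
crit <- polyroot(dcoef)                  # complex roots of the derivative
crit <- sort(Re(crit[abs(Im(crit)) < 1e-8]))  # keep the real roots
crit                                     # extrema locations: -1 and 1
```

Note that polyroot() expects coefficients in increasing order of power, matching the layout used here.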
