Say I have a predictor array `x:(n, px)` and a predicted array `y:(n, py)`. What would be the best way in Python to calculate all linear regression coefficients from `x` to each dimension of `y` (`1...py`)? The output of the whole thing would be a matrix `(py, px)` (for each output, `px` parameters). I could easily iterate over the output dimensions, but that would be inefficient, since I would recalculate the pseudo-inverse of `x` each time. Is there an efficient implementation out there?
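If you want to avoid extra dependencies, note that `numpy.linalg.lstsq` accepts a 2-D right-hand side, so `x` is factorized once for all `py` outputs. A minimal sketch, assuming the shapes stated in the question (the random data here is only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, px, py = 100, 3, 4
x = rng.standard_normal((n, px))
y = rng.standard_normal((n, py))

# Solve the least-squares problem for all py outputs in one call.
# beta has shape (px, py); transpose to get the (py, px) matrix asked for.
beta, residuals, rank, sv = np.linalg.lstsq(x, y, rcond=None)
coef = beta.T  # shape (py, px)
```

Solving per column in a loop would give the same numbers; the single call just shares the factorization of `x` across all outputs.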


#### Best Answer

Python's `sklearn` library implements (among other models) Ordinary Least Squares linear regression, and `LinearRegression.fit` accepts a multi-column `y`, so all `py` regressions are solved in a single call:

http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html

Sample usage:

```python
from sklearn import linear_model

# creating a regression object
regr = linear_model.LinearRegression()

# running OLS on your data, assuming that you already have arrays x and y
regr.fit(x, y)

# displaying the coefficient matrix, shape (py, px)
print(regr.coef_)
```
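As a quick check of the shapes involved, here is the same call on synthetic data with the shapes from the question (the array sizes are arbitrary assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, px, py = 50, 3, 2
x = rng.standard_normal((n, px))
y = rng.standard_normal((n, py))

regr = LinearRegression()
regr.fit(x, y)

# coef_ holds one row of px coefficients per output dimension: shape (py, px)
print(regr.coef_.shape)
```

Note that `LinearRegression` fits an intercept by default (`regr.intercept_`, one value per output); pass `fit_intercept=False` if you want coefficients through the origin.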

### Similar Posts:

- Solved – How to create Hinge loss function in python from scratch
- Solved – Using and interpreting $k$-fold cross validation for regression
- Solved – How to make predictions from Lasso coefficients
- Solved – Polynomial regression seems to give different coefficients depending on Python or R