I am looking to understand the printed coefficients from logistic regression for my classification problem. I have 4 classes (certain, likely, possible and unlikely, for a gene to be related to a disease). I run logistic regression like this:

```python
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression(penalty='l1', C=0.5, max_iter=500,
                            solver='liblinear', multi_class='auto')
logreg.fit(X_train, Y_train)

print('All LR feature weights:')
coef = logreg.coef_[0]
intercept = logreg.intercept_
classes = logreg.classes_
print(coef)
print(intercept)
print(classes)
```

The coefficient output looks like this:

```
All LR feature weights:
[ 0.44250477  0.          0.          0.          0.          0.
  0.          0.          0.          0.          0.          0.
  0.          0.          0.          0.53959074  0.          0.
  0.          0.          0.          0.          0.          0.43750937
  0.          0.          0.          0.          0.          0.2460205
  0.19828843  0.         -0.01036487  0.          0.          0.
  0.          0.          0.        ]
```

I understand that a negative weight relates to the ability to classify negative samples, and a positive weight to positive samples, but how do I interpret this with 4 classes? Do I need to calculate the coefficients in another way? Any help would be appreciated.
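For context, a minimal sketch (with made-up random data, not the gene dataset from the question) showing what shape `coef_` takes in the multiclass case: scikit-learn fits one row of feature weights per class, ordered to match `classes_`.

```python
# Sketch with synthetic data: with 4 classes, logreg.coef_ has one row of
# feature weights per class, in the order given by logreg.classes_.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # 200 samples, 5 features (placeholder data)
y = rng.choice(['certain', 'likely', 'possible', 'unlikely'], size=200)

logreg = LogisticRegression(penalty='l1', C=0.5, max_iter=500,
                            solver='liblinear')
logreg.fit(X, y)

print(logreg.coef_.shape)  # (4, 5): one row per class, one column per feature
print(logreg.classes_)     # row order of coef_ matches this array
```

With `solver='liblinear'` the 4-class problem is fit one-vs-rest, so each row is a separate binary "this class vs. everything else" model.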


#### Best Answer

Someone told me to move from Stack Overflow to this Stack Exchange site, but my answer has actually turned out to be a coding one. I have my labels encoded as 0-3, and:

```python
coef = logreg.coef_[0]
```

then prints the coefficients for label 0 only.

I need:

```python
coef_certain  = logreg.coef_[0]
coef_likely   = logreg.coef_[1]
coef_possible = logreg.coef_[2]
coef_unlikely = logreg.coef_[3]
```

or just

```python
coef_all = logreg.coef_
```

Now I can view the feature weights for every class for interpretation.
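The per-class inspection above can be sketched like this (again with synthetic data; `feature_names` is a hypothetical list standing in for the real column names):

```python
# Sketch: label each row of coef_ with its class so the weights can be read
# off per class. feature_names is a placeholder for the real feature labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))  # placeholder data: 120 samples, 3 features
y = rng.choice(['certain', 'likely', 'possible', 'unlikely'], size=120)
feature_names = ['f0', 'f1', 'f2']  # hypothetical names for illustration

logreg = LogisticRegression(penalty='l1', C=0.5, max_iter=500,
                            solver='liblinear').fit(X, y)

for cls, weights in zip(logreg.classes_, logreg.coef_):
    # Keep only the features L1 did not zero out; a positive weight pushes
    # a sample toward `cls` in that class's one-vs-rest model.
    nonzero = {name: round(float(w), 3)
               for name, w in zip(feature_names, weights) if w != 0}
    print(cls, nonzero)
```

Because each row comes from a one-vs-rest fit, a weight's sign is read per class: positive pushes toward that class, negative pushes away from it.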

### Similar Posts:

- Solved – Logistic-Regression: Prior correction at test time
- Solved – Interpreting coefficients for multinomial regression with >2 classes?
- Solved – How to do one-vs-one classification for logistic regression
- Solved – Why are the residuals computed manually different from those computed by R