Why aren’t power or log transformations taught much in machine learning?

Machine learning (ML) makes heavy use of linear and logistic regression. It also relies on feature engineering techniques (feature transforms, kernels, etc.).

Why is variable transformation (e.g. power transformation) barely mentioned in ML? (For example, I never hear about taking roots or logs of features; people usually just use polynomials or RBFs.) Likewise, why don't ML practitioners care about transformations of the dependent variable? (For example, I never hear about taking the log of y; they just don't transform y.)

Maybe the question is not well posed; my real question is: are power transformations of variables simply not important in ML?

The book Applied Predictive Modeling by Kuhn and Johnson is a highly regarded practical machine learning book with a large section on variable transformations, including Box-Cox. The authors claim that many machine learning algorithms work better if the features have symmetric and unimodal distributions. Transforming the features in this way is an important part of "feature engineering".
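For concreteness, here is a minimal sketch of the kind of transformation Kuhn and Johnson describe, assuming scikit-learn and a synthetic right-skewed feature (the data and variable names are illustrative, not from the book): a Box-Cox power transform on a feature, and a log transform of the target handled through an inverse-transforming wrapper.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer
from sklearn.linear_model import LinearRegression
from sklearn.compose import TransformedTargetRegressor

rng = np.random.default_rng(0)

# A right-skewed, strictly positive feature (lognormal, income-like).
X = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 1))
# A positive, multiplicatively noisy target, so a log transform of y makes sense.
y = X[:, 0] ** 1.5 * np.exp(rng.normal(scale=0.2, size=500))

# Box-Cox estimates a power (lambda) per feature that makes it roughly
# symmetric and unimodal; it requires strictly positive inputs.
pt = PowerTransformer(method="box-cox", standardize=True)
X_bc = pt.fit_transform(X)
print("estimated Box-Cox lambda:", pt.lambdas_)

# Transforming the dependent variable: the regressor is fit on log(y),
# and predictions are mapped back to the original scale automatically.
model = TransformedTargetRegressor(
    regressor=LinearRegression(), func=np.log, inverse_func=np.exp
)
model.fit(X_bc, y)
print("R^2 on the training data:", model.score(X_bc, y))
```

When features can be zero or negative, the Yeo-Johnson variant (`method="yeo-johnson"`) is the usual alternative to Box-Cox.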
