Solved – Normalizing SVM predictions to [0,1]

I have trained a linear SVM that takes a pair of objects, computes features, and is expected to learn a semantic similarity function between the objects (i.e., it predicts whether the two objects are similar enough that they should be merged). The problem I am facing is that the predictions can be … Read more
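A minimal sketch (not the asker's code) of one standard answer to this: squash raw SVM decision values into [0, 1] with a logistic map, as in Platt scaling. The slope and intercept below are illustrative defaults; Platt scaling fits them by maximum likelihood on held-out decision values and labels.

```r
# Map unbounded SVM decision values to (0, 1) with a logistic function.
# A = -1, B = 0 gives the plain sigmoid; Platt scaling would estimate
# A and B from calibration data instead.
to_unit_interval <- function(score, A = -1, B = 0) {
  1 / (1 + exp(A * score + B))
}

scores <- c(-2.3, -0.4, 0.0, 1.1, 3.7)  # hypothetical raw decision values
probs  <- to_unit_interval(scores)
probs  # all values lie in (0, 1); a score of 0 maps to 0.5
```

Note that this only rescales the scores monotonically; whether the outputs are well-calibrated probabilities depends on how A and B are fitted.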

Solved – How to transform a non-linear relationship to make it linear

I have some data which follow a nonlinear relationship like that displayed in the plot below. The non-linear data do not come directly from the explicit function written in the code below!

x1 <- seq(-1, -0.0001, len = 500)
x2 <- seq(0.0001, 1, len = 500)
df <- data.frame(x = c(x1, x2), y = c(1 - 0.0001^x1, 0.0001^(-x2) - 1))
plot(df[, 1], df[, 2], type = "p")

Given that there are negative values in both x and y, how do … Read more
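A hedged sketch of one option for exactly this situation: because both x and y take negative values, an ordinary log transform is unavailable, but the inverse hyperbolic sine asinh(z) = log(z + sqrt(z^2 + 1)) is defined on the whole real line and behaves like a signed log for large |z|.

```r
# Reconstruct the example data from the question.
x1 <- seq(-1, -0.0001, len = 500)
x2 <- seq(0.0001, 1, len = 500)
df <- data.frame(x = c(x1, x2), y = c(1 - 0.0001^x1, 0.0001^(-x2) - 1))

# asinh is a sign-preserving, log-like transform available for any real y.
df$ty <- asinh(df$y)

# Check how close the transformed relationship is to linear.
fit <- lm(ty ~ x, data = df)
summary(fit)$r.squared  # near 1 for this particular data shape
```

This is only a sketch for these simulated values; for other data, shifted-log or Box–Cox-style transforms may be more appropriate.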

Solved – an integrated time series

In this question a commenter says that "differencing a series that is not integrated is certainly problematic from the statistical perspective". What is an integrated time series, and why is differencing a series that is not integrated problematic? Best Answer Consider the first difference $\Delta u_t$ of a linear process (a fairly general way of … Read more
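A small simulation (illustrative, not part of the linked answer) of why this is problematic: differencing a series that is already stationary ("overdifferencing") injects a non-invertible MA(1) component. For white noise $e_t$, the difference $e_t - e_{t-1}$ has lag-1 autocorrelation exactly $-0.5$.

```r
# Overdifferencing demo: difference stationary white noise and inspect
# the induced lag-1 autocorrelation.
set.seed(1)
e  <- rnorm(1e5)   # stationary series, not integrated
de <- diff(e)      # unnecessary differencing

acf(de, plot = FALSE)$acf[2]  # close to the theoretical value -0.5
```

The theoretical value follows from Var(de) = 2*sigma^2 and Cov(de_t, de_{t-1}) = -sigma^2, so the differenced series has a unit MA root and standard estimation theory breaks down.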

Solved – Transformations not correcting significant skews

I am running an experiment which is measured using Likert scales, and 6 of my 24 variables have significant skewness or kurtosis. These are a mix of positive and negative values. I transformed my data using log, square-root, reciprocal, and reverse-score transformations, but this did not … Read more
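An illustrative sketch (not the asker's data) of what a transform can and cannot do: a log transform tames strong right skew in continuous, strictly positive data, but Likert responses are bounded and discrete, which is one reason such transformations often fail to normalize them.

```r
# Moment-based sample skewness (simple illustration-grade version).
skewness <- function(z) mean((z - mean(z))^3) / sd(z)^3

set.seed(2)
x <- rlnorm(1e4)  # strongly right-skewed, strictly positive data

c(before = skewness(x),      # large positive skew
  after  = skewness(log(x))) # near zero: log(x) is normal here
```

For bounded scales, the same trick compresses one tail while the hard floor and ceiling of the scale remain, so the skew statistic may barely move.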

Solved – Why aren’t power or log transformations taught much in machine learning

Machine learning (ML) uses linear and logistic regression techniques heavily. It also relies on feature engineering techniques (feature transforms, kernels, etc.). Why is nothing said about variable transformation (e.g. power transformations) in ML? (For example, I never hear about taking roots or logs of features; people usually just use polynomials or RBFs.) Likewise, why don't … Read more
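A hedged illustration of the point behind the question, on simulated data: when the signal is linear in log(x), a log feature transform lets plain linear regression capture it, while the raw feature cannot.

```r
# Simulated example: y depends linearly on log(x), not on x itself.
set.seed(3)
x <- rlnorm(5000, sdlog = 1.5)          # heavy-tailed positive feature
y <- 2 * log(x) + rnorm(5000, sd = 0.5) # signal is linear in log(x)

r2_raw <- summary(lm(y ~ x))$r.squared      # raw feature: poor fit
r2_log <- summary(lm(y ~ log(x)))$r.squared # log feature: near-perfect fit
c(raw = r2_raw, log = r2_log)
```

Flexible models (trees, RBF kernels) can learn such monotone warps implicitly, which is one common explanation for why explicit power/log transforms get less emphasis in ML courses.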

Solved – Why do we log transform response ratios

In meta-analysis, it seems a standard practice to take the natural log of the response ratio before evaluating it. My question is why? That is, if I have a treatment mean (Xe) and a control mean (Xc), and the response ratio (RR) is defined as Xe/Xc, why would I take the natural logarithm of … Read more
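A brief numeric illustration (not from the thread) of the usual rationale: on the raw scale a doubling (RR = 2) and a halving (RR = 0.5) sit at unequal distances from the null value 1, but on the log scale they are symmetric about 0, so averaging log response ratios treats effects in both directions even-handedly.

```r
# Symmetry of the log response ratio around the null effect.
rr <- c(halving = 0.5, null = 1, doubling = 2)
log(rr)                 # -0.693, 0, +0.693: symmetric about zero

mean(log(c(0.5, 2)))    # 0: back-transforms to exp(0) = 1, no net effect
mean(c(0.5, 2))         # 1.25: raw-scale averaging is biased above 1
```

Equivalently, log(Xe/Xc) = log(Xe) - log(Xc), which also makes the sampling variance of the effect size tractable via the delta method.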
