I am applying some statistical tests using python's scipy.stats library to some datasets that I have (taken in pairs), testing whether they both come from the same unknown distribution.
I don't have much background in statistics, so forgive me for the following questions. I was looking at the documentation and I have some doubts.
- scipy.stats.mannwhitneyu: It returns a "One-sided p-value assuming a asymptotic normal distribution". Why is it assuming a normal distribution? Shouldn't this test work on any underlying distribution?
- scipy.stats.ttest_ind: This test assumes that the populations have identical variances. In my case I can compute the sample variances, so once I do, should I apply the test only if they don't differ by more than some threshold (and if so, which one)? Interestingly, this was the only statistical test that rejected only a few of my hypotheses, while most of the others rejected some 80% of them.
- As a matter of fact, I want to test whether the distribution of one data set is significantly larger than that of all the other data sets put together. Should I use a one-sided or a two-sided test here? This may sound silly, but in the case of a one-sided test, how can I test for one distribution being significantly greater as opposed to significantly smaller? I couldn't find anything in the scipy documentation about this. Swapping the arguments yields the same result.
I will answer your bullets with bullets of my own in the same order:
I think the sentence is referring to the large sample (asymptotic) distribution of the test statistic, not the data. As you can see here, the Mann-Whitney U test statistic has an approximate normal distribution when the sample size is large.
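A minimal sketch of the point above: the normal approximation applies to the U statistic, not to the data, so the test remains valid for non-normal samples. This assumes a recent scipy version in which `mannwhitneyu` accepts the `alternative` and `method` keywords (older versions applied the asymptotic approximation automatically for large samples).

```python
# The normal approximation is for the U statistic, not the data.
# Both samples here come from a (decidedly non-normal) exponential
# distribution; for large n, U itself is approximately normal, which
# is what the asymptotic p-value relies on.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200)
y = rng.exponential(scale=1.0, size=200)

# method="asymptotic" requests the large-sample normal approximation of U
u, p = stats.mannwhitneyu(x, y, alternative="two-sided", method="asymptotic")
print(u, p)
```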
To justify the equal-variance assumption, consider doing some sort of diagnostic check of whether the variances are equal. It is common practice to operate under the equal-variance assumption unless a hypothesis test rejects it. Levene's test, which tests the null hypothesis that the variances are equal, is commonly used for this and has the nice property of being robust to non-normality of the data. When the variances truly are equal, you sacrifice statistical power by not assuming equal variance, so it's good to make the assumption whenever you can justify it. However, note that with a small sample size you may have little power to detect inhomogeneity of variance, so if the sample variances are very different from each other you should consider not assuming equal variance, even if you fail to reject the null in Levene's test.
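The diagnostic workflow above can be sketched like this, using `scipy.stats.levene` to check the equal-variance null and falling back to Welch's t-test (`equal_var=False`) when it is rejected. The 0.05 cutoff is just the conventional choice, not a scipy default:

```python
# Sketch: Levene's test as a diagnostic before choosing the t-test variant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, scale=1.0, size=100)  # variance 1
b = rng.normal(loc=0.5, scale=3.0, size=100)  # variance 9

# Null hypothesis of Levene's test: the two variances are equal
lev_stat, lev_p = stats.levene(a, b)

# If Levene rejects (small p), use Welch's t-test instead of the
# pooled-variance t-test
equal_var = lev_p >= 0.05
t, p = stats.ttest_ind(a, b, equal_var=equal_var)
print(equal_var, p)
```

Here the variances differ by a factor of nine, so Levene's test should reject and the Welch variant gets used.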
If by "I want to test whether the distribution of one data set is significantly larger" you mean that one mean is larger than the other, then this would be a one-sided test. If you're testing an alternative hypothesis of the form $\mu_1 > \mu_2$, then you look at the area to the right of your observed test statistic rather than to the left, which is what distinguishes it from a "less than" one-sided test. Of course, if you interchange the roles of the two samples and switch the hypothesis to a "less than" hypothesis, you will get the same results, since everything is simply reversed. If you're doing a two-sided test, interchanging the roles of the two samples gives you the exact same $p$-value.
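In code, the direction of a one-sided test is selected with the `alternative` keyword (available in recent scipy versions for both `ttest_ind` and `mannwhitneyu`), not by swapping the argument order. A small sketch showing the symmetry described above:

```python
# Sketch: choosing the direction of a one-sided test with `alternative`.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, size=100)  # shifted upward relative to y
y = rng.normal(loc=0.0, size=100)

# H1: mean of x is greater than mean of y
t1, p_greater = stats.ttest_ind(x, y, alternative="greater")

# Interchanging the samples and flipping to "less" reverses everything,
# so it yields the same p-value
t2, p_less = stats.ttest_ind(y, x, alternative="less")
print(p_greater, p_less)
```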