I have two small data sets (sizes $n_1 = 8$ and $n_2 = 21$) which *look* like they have significantly different variances. I know very little about the underlying distributions, but it's definitely not safe to assume they're normal or anything nice like that, which rules out the F-test. I'm aware I could use one of the other named tests (Bartlett, Brown–Forsythe, …) although I'm not currently quite sure what they assume about the population distribution, if anything.

Instead, I've tried my hand at using a permutation test: the null hypothesis is that the two datasets have equal variance, so relabel the data points at random and measure the difference between the variances of the two relabelled sets. Out of 1,000,000 random relabellings, fewer than 40,000 produced a larger difference in variance than the one actually observed (< 4%).
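For concreteness, the procedure described above can be sketched in Python roughly as follows. The example data here are made up (the original samples are not given), so only the structure of the test is meant to match the description:

```python
import numpy as np

def perm_test_var(x, y, n_perm=10_000, rng=None):
    """Two-sided permutation test for equal variance.

    Statistic: |var(x) - var(y)|. Under the null, group labels are
    exchangeable, so we shuffle the pooled data and re-split it.
    """
    rng = rng or np.random.default_rng()
    pooled = np.concatenate([x, y])
    n1 = len(x)
    observed = abs(np.var(x, ddof=1) - np.var(y, ddof=1))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # in-place random relabelling
        stat = abs(np.var(pooled[:n1], ddof=1) - np.var(pooled[n1:], ddof=1))
        if stat >= observed:
            count += 1
    # add-one correction: the observed labelling counts as one permutation,
    # which also keeps the estimated p-value strictly above zero
    return (count + 1) / (n_perm + 1)

# Hypothetical data with the sample sizes from the question (8 and 21)
# and deliberately different spreads.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 8)
y = rng.normal(0.0, 3.0, 21)
p = perm_test_var(x, y, rng=rng)
```

The fraction of shuffled statistics at least as large as the observed one estimates the p-value, matching the "fewer than 40,000 out of 1,000,000" calculation in the question.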

Is it correct to say that, therefore, the difference in variance of the two data sets is significant at the $p < 0.05$ level? If so, is there a well-recognised name for this kind of test?


#### Best Answer

Your permutation test setup makes perfect sense – congratulations!

No, as far as I know, there is no established name for this specific procedure. I'd recommend you describe it as you did here, and perhaps refer to an introductory textbook on permutation tests. (You might want to state whether you ran a one-sided or a two-sided test, which is not entirely clear from your description. Either one, of course, can be implemented in your permutation framework.)
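The one-sided/two-sided distinction mentioned above only changes how the tail of the permutation null distribution is counted. A minimal sketch, using a placeholder null distribution and a made-up observed value (the signed differences var(group 1) − var(group 2) would come from the actual relabellings):

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder: signed variance differences under relabelling.
# In the real test these would be computed from the shuffled data.
null_diffs = rng.normal(0.0, 1.0, 100_000)
observed = 1.8  # hypothetical observed signed difference

# One-sided: only relabellings where group 1 is *more* variable count.
p_one_sided = np.mean(null_diffs >= observed)
# Two-sided: any relabelling with a larger absolute difference counts.
p_two_sided = np.mean(np.abs(null_diffs) >= abs(observed))
```

By construction the two-sided p-value is at least as large as the one-sided one, so the choice can matter near a significance threshold like 0.05.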
