# Asymptotic power

I have encountered the term "asymptotic power of a statistical test" only in connection with the Kolmogorov–Smirnov test (to be precise: asymptotic power = 1). What does this term actually mean? In my opinion it should be something like this: "if the alternative hypothesis is true, then for every significance level alpha there exists a sample size n such that the selected test will reject the null hypothesis". Is "my" definition correct? According to "my" definition the majority of classical tests (t-test, …) should have asymptotic power 1, not only the KS test. Am I right? 😉

The definition above (a fixed alternative, with the sample size going to infinity) more precisely describes the consistency of a hypothesis test: a test is consistent against a fixed alternative if its power function approaches 1 at that alternative as $n \to \infty$.
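To make consistency concrete, here is a minimal sketch (my own illustration, not part of the original discussion) using the one-sided z-test of $H_0\colon \theta = 0$ on $N(\theta, 1)$ data, where the exact power at a fixed alternative can be written down and visibly climbs to 1 as $n$ grows:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_ALPHA = 1.6448536269514722  # upper 5% point of N(0, 1)

def power_fixed_alt(theta, n):
    """Exact power at level 0.05 of the one-sided z-test of
    H0: theta = 0 for N(theta, 1) data: reject when
    sqrt(n) * xbar > Z_ALPHA. Since sqrt(n) * xbar is
    N(sqrt(n) * theta, 1), the power is a closed form."""
    return Phi(math.sqrt(n) * theta - Z_ALPHA)

for n in (10, 100, 1000):
    print(n, round(power_fixed_alt(0.3, n), 4))
```

Running it for the fixed alternative $\theta = 0.3$ shows the power rising from roughly 0.24 at $n = 10$ toward 1 by $n = 1000$, which is exactly the consistency property.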

Asymptotic power is something different. As Joris remarked, with asymptotic power the alternatives $\theta_n$ themselves change: they converge to the null value $\theta_0$ (at rate $1/\sqrt{n}$, say) while the sample size marches to infinity.

Under some regularity conditions (for example, the test statistic has a monotone likelihood ratio, is asymptotically normal with asymptotic variance $\tau^2$ continuous in $\theta$, yada yada yada), if $\sqrt{n}(\theta_n - \theta_0) \to \delta$ then the power function converges to $\Phi(\delta/\tau - z_\alpha)$, where $\Phi$ is the standard normal CDF and $z_\alpha$ its upper-$\alpha$ quantile. This last quantity is called the asymptotic power of such a test.
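As a sanity check on that formula, here is a hedged Monte Carlo sketch (my own, again using the simple one-sided z-test on $N(\theta, 1)$ data, so $\tau = 1$): at the local alternative $\theta_n = \delta/\sqrt{n}$ the simulated rejection rate should settle near $\Phi(\delta/\tau - z_\alpha)$ rather than near 1:

```python
import math
import random

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_ALPHA = 1.6448536269514722  # upper 5% point of N(0, 1)

def simulate_power(delta, n, reps=20000, seed=0):
    """Empirical power of the one-sided z-test of H0: theta = 0
    (reject when sqrt(n) * xbar > Z_ALPHA) against the local
    alternative theta_n = delta / sqrt(n), for N(theta_n, 1) data.
    The sample mean is drawn directly: xbar ~ N(theta_n, 1/n)."""
    rng = random.Random(seed)
    theta_n = delta / math.sqrt(n)
    rejections = 0
    for _ in range(reps):
        xbar = rng.gauss(theta_n, 1.0 / math.sqrt(n))
        if math.sqrt(n) * xbar > Z_ALPHA:
            rejections += 1
    return rejections / reps

delta, tau = 2.0, 1.0
print("asymptotic power:", round(Phi(delta / tau - Z_ALPHA), 3))
print("empirical, n = 400:", simulate_power(delta, 400))
```

With $\delta = 2$ the limit is $\Phi(2 - 1.645) \approx 0.64$: the test rejects far more often than the level $\alpha = 0.05$, but nowhere near probability 1, which is what distinguishes asymptotic power against local alternatives from consistency against a fixed one.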

See Lehmann's *Elements of Large Sample Theory* for discussion and worked-out examples.

By the way, yes, the majority of classical tests are consistent.
