Solved – Does the estimated overdispersion parameter of the Negative Binomial depend on the mean

The Negative Binomial distribution can be parameterized by its mean $\mu$ and an overdispersion parameter $\psi$, so that the variance of the NB is $\mu + \frac{\mu^2}{\psi}$. We know there is no analytical solution for estimating $\psi$.

I understand that the variance of the NB depends on the mean $\mu$. But does the estimated $\psi$ also depend on the mean, say when using MLE or any other commonly used estimation method?

I'll begin by noting that there are actually quite a few different estimators of the dispersion parameter of a negative binomial out there, and that my discussion here is limited to the estimators I am familiar with. To answer your question:

In the maximum likelihood case, the answer is yes; in other cases the answer may be no. I'll start with the maximum likelihood case. It is not too hard to see that the maximum likelihood estimate of $\psi$ will depend on $\mu$: take the first derivative of the log-likelihood with respect to $\psi$ and note that, even though setting it to zero has no closed-form solution, its value still depends on $\mu$.
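To make this concrete, here is a minimal numerical sketch (not from the original answer) assuming the $(\mu, \psi)$ parameterization above, which maps to SciPy's `nbinom` via $n = \psi$ and $p = \psi/(\psi + \mu)$. Holding the data fixed, the value of $\psi$ that maximizes the likelihood changes as the plugged-in mean changes, which is exactly the dependence described above.

```python
# Minimal sketch, assuming the (mu, psi) parameterization above.
# SciPy's nbinom(n, p) has mean n*(1-p)/p, so n = psi and p = psi/(psi + mu)
# gives mean mu and variance mu + mu^2/psi.
import numpy as np
from scipy import optimize, stats

def nb_negloglik(mu, psi, x):
    """Negative log-likelihood of iid NB(mu, psi) data."""
    p = psi / (psi + mu)
    return -np.sum(stats.nbinom.logpmf(x, psi, p))

def psi_hat_given_mu(mu, x):
    """Value of psi maximizing the likelihood when mu is held fixed."""
    res = optimize.minimize_scalar(lambda psi: nb_negloglik(mu, psi, x),
                                   bounds=(1e-3, 1e3), method="bounded")
    return res.x

# Simulate data with true mu = 5, psi = 2 and profile psi at different means.
true_mu, true_psi = 5.0, 2.0
x = stats.nbinom.rvs(true_psi, true_psi / (true_psi + true_mu), size=200,
                     random_state=np.random.default_rng(42))

# The maximizer of psi shifts as the mean we plug in shifts.
for mu in (np.mean(x), 4.0, 6.0):
    print(f"mu = {mu:.2f}  ->  psi_hat = {psi_hat_given_mu(mu, x):.3f}")
```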

Another method that can estimate $\psi$ is known as conditional maximum likelihood. The sum of iid Negative Binomial observations is also negative binomial, and it can further be shown that the sum is a sufficient statistic for $\mu$, so we can form an exact conditional likelihood for $\psi$ that does not depend on the value of $\mu$. For more details see Section 4 of:

http://biostatistics.oxfordjournals.org/content/9/2/321.full.pdf+html
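As a rough sketch of the idea (again assuming the $(\mu, \psi)$ parameterization above, not code from the paper), the conditional log-likelihood given the sum can be written with gamma functions and contains no $\mu$ term at all:

```python
# Minimal sketch of the conditional likelihood given S = sum(x): for iid NB
# data with dispersion psi, P(X = x | S = s) is a ratio of binomial
# coefficients that involves psi only, so mu drops out entirely.
import numpy as np
from scipy import optimize, special, stats

def nb_cond_loglik(psi, x):
    """log P(X_1 = x_1, ..., X_n = x_n | sum(X) = s) for dispersion psi."""
    x = np.asarray(x)
    n, s = x.size, x.sum()
    # sum_i log C(x_i + psi - 1, x_i)  -  log C(s + n*psi - 1, s)
    num = np.sum(special.gammaln(x + psi) - special.gammaln(psi)
                 - special.gammaln(x + 1))
    den = (special.gammaln(s + n * psi) - special.gammaln(n * psi)
           - special.gammaln(s + 1))
    return num - den

def psi_cmle(x):
    """Conditional maximum likelihood estimate of psi."""
    res = optimize.minimize_scalar(lambda psi: -nb_cond_loglik(psi, x),
                                   bounds=(1e-3, 1e3), method="bounded")
    return res.x

# Example: the estimate is a function of the data alone; no mean is plugged in.
x = stats.nbinom.rvs(2.0, 2.0 / (2.0 + 5.0), size=30,
                     random_state=np.random.default_rng(1))
print("conditional MLE of psi:", round(psi_cmle(x), 3))
```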

The conditional maximum likelihood estimate has been shown to be less biased than the maximum likelihood estimate, which tends to underestimate the true dispersion. This makes sense intuitively: just as in the normal case, the maximum likelihood estimate of the dispersion parameter depends on the estimated mean but makes no adjustment for the fact that the mean is estimated from the same data.

So, I would recommend using conditional maximum likelihood over the MLE in small-sample problems, or searching for a bias-correction technique for the maximum likelihood estimate. For large samples, though, you might just as well use the maximum likelihood estimate, since it won't make much of a difference.
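As a usage example, a small simulation along the following lines (hypothetical settings, re-using `psi_cmle` from the conditional-likelihood sketch above) can be used to compare the two estimators in small samples:

```python
# Small simulation sketch comparing the full MLE of psi with the conditional
# MLE in small samples; psi_cmle is defined in the sketch above.
import numpy as np
from scipy import optimize, stats

def psi_mle(x):
    """psi component of the joint MLE of (mu, psi)."""
    def negll(params):
        mu, psi = params
        return -np.sum(stats.nbinom.logpmf(x, psi, psi / (psi + mu)))
    res = optimize.minimize(negll, x0=[np.mean(x), 1.0],
                            bounds=[(1e-3, None), (1e-3, 1e3)])  # cap psi for stability
    return res.x[1]

rng = np.random.default_rng(0)
true_mu, true_psi, n = 5.0, 2.0, 15
mle_hats, cmle_hats = [], []
for _ in range(200):
    x = stats.nbinom.rvs(true_psi, true_psi / (true_psi + true_mu), size=n,
                         random_state=rng)
    mle_hats.append(psi_mle(x))
    cmle_hats.append(psi_cmle(x))   # from the conditional-likelihood sketch

print("true psi:            ", true_psi)
print("average MLE  psi-hat:", np.mean(mle_hats))
print("average CMLE psi-hat:", np.mean(cmle_hats))
```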
