Solved – Maximum Likelihood estimator for a family of binomial distributions

In the example below, I am treating Heads as a success and Tails as a failure when I toss a coin.

(E.g., the first row in the table below says that when I tossed the coin 10 times I got 3 successes, so the probability of success is 0.3.)

[Table: Binomial Distribution Example — for each sample of 10, 20, …, 100 tosses, the number of successes and the resulting probability of success]

Now, considering that the observed probability of success might change as the number of trials increases, I know that the maximum likelihood estimator for a single binomial distribution is (number of successes) / (total number of trials). However, calculating the MLE for this kind of data does not seem that straightforward to me. Could someone tell me if I am missing something?

P.S.: This is a research-based question. Any help would be appreciated. Thanks in advance.

Since, per the clarification comment, we are tossing the same coin, the probability of success in each single Bernoulli trial is the same $p$; it is not affected by the number of trials (assuming also an unbiased way of tossing). If moreover we can assume that all Bernoulli trials are independent, and that each sub-sample consists of different tosses (i.e. the $n=20$ sample does not contain the $10$ tosses of the $n=10$ sample), then, viewed together, we have an independent but non-identically distributed sample of realizations from $10$ binomials that share the same unknown probability parameter but have different (known and deterministic) "number of trials" parameters, $S_i \sim \text{Binomial}(n_i, p),\ i=1,2,3,\dots,10$, corresponding to $n$-parameters $10,20,30,\dots,100$.
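
To make this setup concrete, here is a minimal simulation sketch of such a sample (the true value $p=0.3$ and the use of NumPy are illustrative assumptions on my part, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)

p_true = 0.3                 # hypothetical true probability of success (assumption)
n = np.arange(10, 101, 10)   # numbers of tosses: 10, 20, ..., 100
k = rng.binomial(n, p_true)  # one independent Binomial(n_i, p) draw per sub-sample

for n_i, k_i in zip(n, k):
    print(f"n = {n_i:3d}, successes = {k_i:3d}, proportion = {k_i / n_i:.2f}")
```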

Then the joint likelihood of this sample is (ignoring multiplicative constants that do not involve the unknown parameter)

$$L \propto \prod_{i=1}^{10}p^{k_i}(1-p)^{n_i-k_i} = p^{\sum k_i}(1-p)^{\sum (n_i-k_i)}$$

where the $k_i$'s are the observed numbers of successes.

So the log-likelihood is

$$\ln L = \left(\sum_{i=1}^{10}k_i\right)\ln p + \left(\sum_{i=1}^{10}(n_i-k_i)\right)\ln (1-p)$$
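
Differentiating with respect to $p$ and setting the derivative equal to zero gives the first-order condition

$$\frac{\partial \ln L}{\partial p} = \frac{\sum_{i=1}^{10}k_i}{p} - \frac{\sum_{i=1}^{10}(n_i-k_i)}{1-p} = 0$$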

Solving for $p$, you should get

$$\hat p = \frac {\sum_{i=1}^{10}k_i}{\sum_{i=1}^{10}n_i}$$

as should be expected: you pooled i.i.d. Bernoulli draws, so the estimator treats them as $\sum_{i=1}^{10}n_i$ draws from a Bernoulli$(p)$ random variable in which we observed $\sum_{i=1}^{10}k_i$ successes.
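
As a quick sanity check, here is a minimal sketch that computes this pooled estimate and verifies numerically that it maximizes the joint log-likelihood (the success counts below are made up for illustration, and SciPy's generic optimizer is used only as an independent check):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: trial counts and success counts for the ten sub-samples
n = np.arange(10, 101, 10)                             # 10, 20, ..., 100 tosses
k = np.array([3, 5, 10, 11, 16, 19, 20, 25, 26, 31])   # made-up success counts

# Closed-form pooled MLE: total successes / total trials
p_hat = k.sum() / n.sum()

# Numerical check: maximize the joint log-likelihood over p in (0, 1)
def neg_log_lik(p):
    return -(k.sum() * np.log(p) + (n - k).sum() * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")

print(f"closed-form MLE : {p_hat:.4f}")
print(f"numerical MLE   : {res.x:.4f}")
```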
