I know that for uniformly distributed random variables $X_1, X_2, \dots, X_n \in \mathbb{R}$, the p.d.f. is given by:

$f(x_i) = \dfrac{1}{\theta}$ ; if $0 \le x_i \le \theta$

$f(x_i) = 0$ ; otherwise

If the uniformly distributed random variables are arranged in the following order

$0 \le X_{(1)} \le X_{(2)} \le \dots \le X_{(n)} \le \theta$,

I understand that the likelihood function is given by

$L(\theta) = \prod_{i=1}^{n} f(x_i) = \theta^{-n}$

The log-likelihood is:

$\ln L(\theta) = -n\ln(\theta)$

Differentiating with respect to the parameter $\theta$, we get:

$\frac{\mathrm d}{\mathrm d\theta}\ln L(\theta) = -\frac{n}{\theta}$

which is $< 0$ for $\theta > 0$.

Hence, $L(\theta)$ is a decreasing function, and it is maximized at $\theta = X_{(n)}$.

The maximum likelihood estimate is thus

$\hat{\theta} = X_{(n)}$
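As a quick numerical sanity check of this result, one can evaluate the likelihood on a grid and confirm that its maximizer coincides with the sample maximum. This is an illustrative sketch (the sample size, seed, and grid are arbitrary choices, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 5.0
x = rng.uniform(0, theta_true, size=50)  # sample from Uniform(0, theta_true)

def likelihood(theta, x):
    """L(theta) = theta^{-n} if theta >= max(x), else 0
    (the density is zero outside [0, theta])."""
    n = len(x)
    return theta ** (-n) if theta >= x.max() else 0.0

# Evaluate on a fine grid and locate the maximizer.
grid = np.linspace(0.01, 10.0, 100_000)
L = np.array([likelihood(t, x) for t in grid])
theta_hat = grid[L.argmax()]

# theta_hat agrees with the sample maximum up to the grid resolution.
print(theta_hat, x.max())
```

The argmax lands on the smallest grid point at or above $x_{(n)}$, since the likelihood is zero below the sample maximum and strictly decreasing above it.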

My question is: what if I find the supremum to solve this instead?


#### Best Answer

The result is correct, but the reasoning is somewhat inaccurate. You need to keep track of the fact that the density is zero outside $[0,\theta]$. This implies that the likelihood is zero to the left of the sample maximum and jumps to $\theta^{-n}$ at the maximum. It indeed decreases afterwards, so that the sample maximum is the MLE.

This also entails that the likelihood is not differentiable at this point, so finding the MLE via the "canonical" route of setting the score function to zero is not the way to go here.
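Making the support constraint explicit with an indicator function shows both points at once, and also answers the supremum question:

```latex
L(\theta) \;=\; \prod_{i=1}^{n} \frac{1}{\theta}\,\mathbf{1}\{0 \le x_i \le \theta\}
         \;=\; \theta^{-n}\,\mathbf{1}\{\theta \ge x_{(n)}\}.
```

For $\theta < x_{(n)}$ the likelihood is $0$; for $\theta \ge x_{(n)}$ it equals $\theta^{-n}$, which is strictly decreasing. Hence $\sup_{\theta > 0} L(\theta)$ is attained at $\theta = x_{(n)}$, giving $\hat{\theta} = X_{(n)}$, so taking the supremum over the region where the likelihood is positive leads to the same estimator.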

A more detailed formal derivation is given, e.g., here.