A pdf is usually written as $f(x \mid \theta)$, where the lowercase $x$ is treated as a realization or outcome of the random variable $X$ that has that pdf. Similarly, a cdf is written as $F_X(x)$, which means $P(X \leq x)$. However, in some circumstances, such as the definition of the score function and the derivation that the cdf of a continuous random variable is uniformly distributed, it appears that the random variable $X$ is being plugged into its own pdf/cdf; by doing so, we get a *new random variable* $Y = f(X \mid \theta)$ or $Z = F_X(X)$. I don't think we can call this a pdf or cdf anymore, since it is now a random variable itself, and in the latter case the "interpretation" $F_X(X) = P(X \leq X)$ seems like nonsense to me.

Additionally, in the latter case above, I am not sure I understand the statement "the cdf of a random variable follows a uniform distribution". The cdf is a function, not a random variable, and therefore *doesn't have* a distribution. Rather, what has a uniform distribution is the random variable transformed by the function that represents its own cdf, but I don't see why this transformation is meaningful. The same goes for the score function, where we plug a random variable into the function that represents its own log-likelihood.

I have been racking my brain for weeks trying to come up with an intuitive meaning behind these transformations, but I am stuck. Any insight would be greatly appreciated!


#### Best Answer

Like you say, any (measurable) function of a random variable is itself a random variable. It is easier to just think of $f(x)$ and $F(x)$ as "any old functions"; they just happen to have some nice properties. For instance, if $X$ is a standard exponential RV, then there is nothing particularly strange about the random variable $$Y = 1 - e^{-X}.$$ It just so happens that $Y = F_X(X)$. The fact that $Y$ has a uniform distribution (given that $X$ is a continuous RV) can be seen in the general case by deriving the CDF of $Y$:

$$\begin{align*} F_Y(y) &= P(Y \leq y) \\ &= P(F_X(X) \leq y) \\ &= P(X \leq F^{-1}_X(y)) \\ &= F_X(F^{-1}_X(y)) \\ &= y \end{align*}$$

This is clearly the CDF of a $U(0,1)$ random variable. *Note: this version of the proof assumes that $F_X(x)$ is strictly increasing and continuous, but it is not too much harder to show a more general version.*
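As a quick sanity check (a minimal simulation sketch, not part of the proof), we can draw from a standard exponential, apply its own CDF $F_X(x) = 1 - e^{-x}$, and compare the empirical CDF of the result against the $U(0,1)$ CDF $F_Y(y) = y$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=100_000)  # draws from a standard exponential
y = 1 - np.exp(-x)                 # Y = F_X(X), the probability integral transform

# For a U(0,1) variable, P(Y <= q) = q; the empirical fractions should match.
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"q = {q:.2f}   empirical P(Y <= q) = {np.mean(y <= q):.4f}")
```

The empirical fractions come out within simulation error of $q$ itself, which is exactly the uniform CDF derived above.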
