Solved – Prove that the mean value of a convolution is the sum of the mean values of its individual parts

Prove that the mean value of a convolution of density functions, provided the mean values exist (they may not for fat-tailed distributions), is the sum of the mean values of the density functions used to make the convolution.

This is a simple one, but I would like to see what the compact formal notation looks like for it.

Like many demonstrations involving convolutions, it comes down to applying Fubini's Theorem.

Let's establish notation and assumptions.

Let $f$ and $g$ be integrable real-valued functions defined on $\mathbb{R}^n$ having unit integrals (with respect to Lebesgue measure): that is,

$$1=\int_{\mathbb{R}^n} f(x)\, dx = \int_{\mathbb{R}^n} g(x)\, dx.$$

(For convenience, let's drop the "$\mathbb{R}^n$" subscript, because all integrals will be evaluated over this entire space.)

The convolution $f\star g$ is the function defined by

$$(f\star g)(x) = \int f(x-y)\, g(y)\, dy.$$

(This is guaranteed to exist when $f$ and $g$ are both bounded or whenever $f$ and $g$ are both probability density functions.)
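
As a quick illustration (not part of the original argument), a convolution can be approximated numerically by evaluating the two densities on a grid and forming a discrete convolution; the particular densities below are arbitrary choices for this sketch.

```python
import numpy as np

# Minimal sketch: approximate (f*g)(x) on a uniform grid, where f is the
# standard normal density and g the Exp(1) density (arbitrary examples,
# chosen only to illustrate the definition above).
dx = 0.01
x = np.arange(-10.0, 10.0, dx)

f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density
g = np.where(x >= 0, np.exp(-x), 0.0)        # Exp(1) density

# The Riemann sum  sum_y f(x - y) g(y) dx  approximates the integral, so the
# discrete convolution must be scaled by dx.
fg = np.convolve(f, g, mode="same") * dx

print(fg.sum() * dx)   # ~ 1: the convolution of two densities is a density
```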

The mean of any integrable function is

$$E[f] = \int x\, f(x)\, dx.$$

It might be infinite or undefined.
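
For instance (a concrete example added here for illustration), the standard exponential density has mean $1$, while the Cauchy density has no mean because the defining integral is not absolutely convergent:

$$\int_0^\infty x\, e^{-x}\, dx = 1, \qquad \int_{-\infty}^{\infty} \frac{|x|}{\pi(1+x^2)}\, dx = \infty.$$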

Solution

The question asks us to compute $E[f\star g]$ (in the special case where $f$ and $g$ are nonnegative, although this assumption doesn't matter). Apply the definitions of $E$ and $\star$ to obtain a double integral; switch the order of integration according to Fubini's Theorem (which requires assuming $E[f\star g]$ is finite); then substitute $x-y\to u$ and exploit the linearity of integration (a basic property established as soon as any theory of integration is developed). The result follows because both $f$ and $g$ have unit integrals.


For those who want to see the details, here they are:

$$\begin{aligned}
E[f\star g] &= \int x\, (f\star g)(x)\, dx &&\text{Definition of } E\\
&= \int x \left(\int f(x-y)\, g(y)\, dy\right) dx &&\text{Definition of convolution}\\
&= \int g(y) \left(\int x\, f(x-y)\, dx\right) dy &&\text{Fubini}\\
&= \int g(y) \left(\int \bigl[(x-y)f(x-y) + y\,f(x-y)\bigr]\, dx\right) dy &&\text{Expand } x=(x-y)+y\\
&= \int g(y) \left(\int (x-y)f(x-y)\, dx + y\int f(x-y)\, dx\right) dy &&\text{Linearity of integration}\\
&= \int g(y) \left(\int u\, f(u)\, du + y \int f(u)\, du\right) dy &&\text{Substitution } x-y\to u\\
&= \int g(y)\, \bigl(E[f] + y\cdot 1\bigr)\, dy &&\text{Assumptions about } f\\
&= E[f]\int g(y)\, dy + \int y\, g(y)\, dy &&\text{Linearity of integration}\\
&= E[f]\cdot 1 + E[g] &&\text{Assumptions about } g\\
&= E[f] + E[g].
\end{aligned}$$

These calculations are legitimate provided all three expectations $E[f\star g]$, $E[f]$, $E[g]$ are defined and finite. Fubini's Theorem requires only the finiteness of $E[f\star g]$, but the steps at the end (involving linearity) also need the finiteness of the other two expectations.
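
As an optional numerical sanity check (not part of the original answer): if $X$ and $Y$ are independent with densities $f$ and $g$, then $X+Y$ has density $f\star g$, so the sample mean of $X+Y$ should approximate $E[f]+E[g]$. The densities below are arbitrary choices for the sketch.

```python
import numpy as np

# Quick Monte Carlo check of E[f*g] = E[f] + E[g], assuming f is the N(2,1)
# density and g the Exp(1) density (arbitrary choices for this sketch).
# If X ~ f and Y ~ g are independent, X + Y has density f*g, so the sample
# mean of X + Y should approximate E[f] + E[g] = 2 + 1 = 3.
rng = np.random.default_rng(0)
n = 1_000_000

x = rng.normal(loc=2.0, scale=1.0, size=n)   # draws from f
y = rng.exponential(scale=1.0, size=n)       # draws from g

print((x + y).mean())   # ~ 3.0, matching E[f] + E[g]
```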
