I have a multivariate distribution (for which I know the parameters) that I simulate data from. I then fit several distributions to this simulated data using several different approaches (similar to MLE), and I want to measure how close each fitted distribution is to the true distribution.

Most of the methods I have found so far regarding "comparing distributions" seem to really be comparing *samples* from two different processes. I'm curious whether there is any research that provides a metric of how close two known *distributions* are to one another. My initial thought is to consider something like an integral of the absolute difference between the two densities over the entire domain, but that seems computationally difficult and maybe not the best approach.
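For what it's worth, the integral-of-absolute-difference idea can be computed numerically in low dimensions. Here is a minimal sketch in 1D with two normal densities standing in for the true and fitted distributions (the particular parameters are just for illustration); half of this integral is the total variation distance:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

p = norm(loc=0.0, scale=1.0)  # stand-in for the "true" distribution
q = norm(loc=0.5, scale=1.2)  # stand-in for a "fitted" distribution

# Integrate |p(x) - q(x)| over the real line; half of this L1 distance
# is the total variation distance between the two distributions.
l1, _ = quad(lambda x: abs(p.pdf(x) - q.pdf(x)), -np.inf, np.inf)
tv = 0.5 * l1
print(tv)
```

In higher dimensions this kind of quadrature becomes expensive quickly, which is presumably the computational difficulty you anticipated.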


#### Best Answer

You might find the notions of [Hellinger distance](https://en.wikipedia.org/wiki/Hellinger_distance) and [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) interesting. Both can be applied to multivariate distributions, although the actual computations can get difficult in some cases.
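One case where the computations are easy is when both distributions are multivariate normal: KL divergence and (squared) Hellinger distance have closed forms. A sketch, assuming that Gaussian special case (the helper names are my own):

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL(N0 || N1) for multivariate normals N0=(mu0, S0), N1=(mu1, S1)."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + d @ S1_inv @ d
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def hellinger2_mvn(mu0, S0, mu1, S1):
    """Squared Hellinger distance between two multivariate normals."""
    S = 0.5 * (S0 + S1)
    d = mu1 - mu0
    coef = (np.linalg.det(S0) ** 0.25 * np.linalg.det(S1) ** 0.25
            / np.sqrt(np.linalg.det(S)))
    return 1.0 - coef * np.exp(-0.125 * d @ np.linalg.inv(S) @ d)
```

For non-Gaussian fitted distributions you would typically fall back on numerical integration or Monte Carlo estimates of these divergences.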