# Solved – Standard measure of clumpiness

I have a lot of data and I want to do something which seems very simple. In this large set of data, I am interested in how much a specific element clumps together. Let's say my data is an ordered set like this: {A,C,B,D,A,Z,T,C…}. Let's say I want to know whether the A's tend to be found right next to each other, as opposed to being randomly (or more evenly) distributed throughout the set. This is the property I am calling "clumpiness".

Now, is there some simple measurement of data "clumpiness"? That is, some statistic that will tell me how far from randomly distributed the As are? And if there isn't a simple way to do this, what would the hard way be, roughly? Any pointers greatly appreciated!


As an example, suppose you have an ordered set in which each position has an equal probability of being any of the lowercase letters in the alphabet. In this case I will make the ordered set contain \$1000\$ elements.

```r
# generate a possible sequence of letters
s <- sample(x = letters, size = 1000, replace = TRUE)
```

It turns out that if each position of the ordered set follows a uniform distribution over the lowercase letters of the alphabet, then the distance between two occurrences of the same letter follows a geometric distribution with parameter \$p = 1/26\$: each subsequent position independently matches with probability \$1/26\$, so \$P(D = d) = (1 - p)^{d - 1} p\$ for \$d = 1, 2, \dots\$ (note that R's `dgeom` counts failures before the first success, hence the \$d - 1\$ offset in the code below). In light of this information, let's compute the distance between consecutive occurrences of the same letter.

```r
# find the distances between occurrences of the same letter
d <- vector(mode = 'list', length = length(unique(letters)))
for(i in 1:length(unique(letters))) {
    d[[i]] <- diff(which(s == letters[i]))
}
d.flat <- unlist(x = d)
```
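Since the mean of a geometric distribution with parameter \$p\$ is \$1/p\$, a quick sanity check is that the average distance should come out near \$26\$. A minimal self-contained sketch (regenerating the sequence, with a seed added for reproducibility):

```r
# regenerate the sequence with a fixed seed so the check is reproducible
set.seed(1)
s <- sample(x = letters, size = 1000, replace = TRUE)

# distances between consecutive occurrences of each letter
d.flat <- unlist(lapply(letters, function(l) diff(which(s == l))))

# the mean distance should be close to 1/p = 26
mean(d.flat)
```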

Let's look at a histogram of the distances between occurrences of the same letter and compare it to the probability mass function associated with the geometric distribution mentioned above.

```r
hist(x = d.flat, prob = TRUE, main = 'Histogram of Distances', xlab = 'Distance',
     ylab = 'Probability')
x <- range(d.flat)
x <- x[1]:x[2]
y <- dgeom(x = x - 1, prob = 1/26)
points(x = x, y = y, pch = '.', col = 'red', cex = 2)
```

The red dots represent the probability mass function of the distances we would expect if each position of the ordered set followed a uniform distribution over the letters, and the bars of the histogram represent the empirical probability mass function of the distances from the ordered set.

Hopefully the image above is convincing that the geometric distribution is appropriate.

Again, if each position of the ordered set follows a uniform distribution over the letters, we would expect the distance between occurrences of the same letter to follow a geometric distribution with parameter \$p = 1/26\$. So how similar are the expected distribution of the distances and the empirical distribution of the distances? For two discrete distributions \$p\$ and \$q\$, the Bhattacharyya Distance is \$D_B(p, q) = -\ln \sum_x \sqrt{p(x) \, q(x)}\$; it is \$0\$ when the distributions are exactly the same and tends to \$\infty\$ as the distributions become increasingly different.

How does `d.flat` from above compare to the expected geometric distribution in terms of Bhattacharyya Distance?

```r
b.dist <- 0
for(i in x) {
    b.dist <- b.dist + sqrt((sum(d.flat == i) / length(d.flat)) *
                            dgeom(x = i - 1, prob = 1/26))
}
b.dist <- -1 * log(x = b.dist)
```

The Bhattacharyya Distance between the expected geometric distribution and the empirical distribution of the distances is about \$0.026\$, which is fairly close to \$0\$.

EDIT:

Rather than simply stating that the Bhattacharyya Distance observed above (\$0.026\$) is fairly close to \$0\$, I think this is a good example of when simulation comes in handy. The question now is the following: How does the Bhattacharyya Distance observed above compare to typical Bhattacharyya Distances observed if each position of the ordered set is uniform over the letters? Let's generate \$10,000\$ such ordered sets and compute each of their Bhattacharyya Distances from the expected geometric distribution.

```r
gen.bhat <- function(set, size) {
    new.seq <- sample(x = set, size = size, replace = TRUE)
    d <- vector(mode = 'list', length = length(unique(set)))
    for(i in 1:length(unique(set))) {
        d[[i]] <- diff(which(new.seq == set[i]))
    }
    d.flat <- unlist(x = d)
    x <- range(d.flat)
    x <- x[1]:x[2]
    b.dist <- 0
    for(i in x) {
        b.dist <- b.dist + sqrt((sum(d.flat == i) / length(d.flat)) *
                                dgeom(x = i - 1, prob = 1/length(unique(set))))
    }
    b.dist <- -1 * log(x = b.dist)
    return(b.dist)
}
dist.bhat <- replicate(n = 10000, expr = gen.bhat(set = letters, size = 1000))
```

Now we may compute the probability of observing the Bhattacharyya Distance observed above, or one more extreme, if the ordered set was generated in such a way that each of its positions follows a uniform distribution over the letters.

```r
p <- ifelse(b.dist <= mean(dist.bhat),
            sum(dist.bhat <= b.dist) / length(dist.bhat),
            sum(dist.bhat > b.dist) / length(dist.bhat))
```

In this case, the probability turns out to be about \$0.38\$.

For completeness, the following image is a histogram of the simulated Bhattacharyya Distances. I think it's important to realize that you will never observe a Bhattacharyya Distance of exactly \$0\$ because the ordered set has finite length. Above, the distance between any two occurrences of a letter is at most \$999\$.
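To see the statistic react to actual clumpiness, here is a sketch (the name `clumpy` and the run length of \$3\$ are my own arbitrary choices) that builds a sequence in which every letter appears in runs of three and computes its Bhattacharyya Distance from the same geometric distribution:

```r
set.seed(1)
# a "clumpy" sequence: each sampled letter is repeated 3 times in a row
clumpy <- rep(sample(x = letters, size = 334, replace = TRUE), each = 3)[1:1000]

# distances between consecutive occurrences of the same letter
d.flat <- unlist(lapply(letters, function(l) diff(which(clumpy == l))))

# Bhattacharyya Distance from the geometric distribution with p = 1/26
x <- min(d.flat):max(d.flat)
p.emp <- as.numeric(table(factor(d.flat, levels = x))) / length(d.flat)
b.dist.clumpy <- -log(sum(sqrt(p.emp * dgeom(x = x - 1, prob = 1/26))))
```

The resulting distance should come out much larger than the \$0.026\$ observed for the uniform sequence, since the pile-up of distances equal to \$1\$ pulls the empirical distribution far from the geometric one.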
