I have a possibly silly question about which, I must confess, I'm confused. Imagine repeatedly generating uniformly distributed random orthogonal (orthonormal) matrices of some size $p$. Sometimes the generated matrix has determinant $1$ and sometimes determinant $-1$; these are the only two possible values. (From the point of view of orthogonal rotation, $\det=-1$ means that there is also one additional reflection besides the rotation.)

We can change the sign of the determinant of an orthogonal matrix from minus to plus by changing the sign of any *one* column (or, more generally, of any odd number of columns).

My question is: given that we generate such random matrices repeatedly, will we **introduce some bias** into their uniform random nature if each time we choose to flip the sign of only one specific column (say, always the 1st or always the last)? Or must we pick the column at random in order to keep the matrices uniformly distributed?


#### Best Answer

The choice of column doesn't matter: the resulting distribution on the special orthogonal matrices, $SO(n)$, is still uniform.

I will explain this by using an argument that extends, in an obvious manner, to many related questions about uniform generation of elements of groups. Each step of this argument is trivial, requiring nothing more than reference to suitable definitions or a simple calculation (such as noting that the matrix $\mathbb{I}_1$, defined below, is orthogonal and self-inverse).

**The argument is a generalization of a familiar situation.** Consider the task of drawing *positive* real numbers according to a specified continuous distribution $F$. This can be done by drawing *any* real number from a continuous distribution $G$ and negating the result, if necessary, to guarantee a positive value (almost surely). In order for this process to have the distribution $F$, $G$ must have the property that

$$G(x) - G(-x) = F(x).$$

The simplest way to accomplish this is when $G$ is symmetric around $0$ so that $G(x) - 1/2 = 1/2 - G(-x)$, entailing $F(x) = 2G(x) - 1$: all positive probability densities are simply doubled and all negative outcomes are eliminated. The familiar relationship between the half-normal distribution ($F$) and normal distribution ($G$) is of this kind.
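As a concrete check, here is a minimal sketch (assuming NumPy) of this folding: negating the negative draws from a standard normal $G$ yields half-normal samples. The value $0.6745$ is the standard normal's 75th percentile $\Phi^{-1}(0.75)$, which by $F = 2G - 1$ is the half-normal median.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from a symmetric G (standard normal), then "fold" by negating
# any negative outcomes; the result follows the half-normal F = 2G - 1.
x = rng.standard_normal(100_000)
folded = np.where(x < 0, -x, x)  # equivalent to np.abs(x)

# Sanity checks: all values are nonnegative, and the sample median is
# near Phi^{-1}(0.75) ~ 0.6745, the median of the half-normal.
assert (folded >= 0).all()
print(np.median(folded))
```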

In the following, the group $O(n)$ plays the role of the non-zero real numbers (considered as a *multiplicative* group) and its subgroup $SO(n)$ plays the role of the positive real numbers $\mathbb{R}_{+}$. The Haar measure $dx/|x|$ is invariant under negation, so when it is "folded" from $\mathbb{R}\setminus\{0\}$ to $\mathbb{R}_{+}$, the distribution of the positive values does not change. (This measure, unfortunately, cannot be normalized to a probability measure, but that is the only way in which the analogy breaks down.)

Negating a specific column of an orthogonal matrix (when its determinant is negative) is the analog of negating a negative real number to fold it into the positive subgroup. More generally, you could pick in advance *any* orthogonal matrix $\mathbb{J}$ of negative determinant and use it instead of $\mathbb{I}_1$: the results would be the same.

Although the question is phrased in terms of generating random variables, it really asks about probability distributions on the matrix groups $O(n, \mathbb{R}) = O(n)$ and $SO(n, \mathbb{R}) = SO(n)$. The connection between these groups is described in terms of the orthogonal matrix

$$\mathbb{I}_1 = \begin{pmatrix}-1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1\end{pmatrix}$$

because negating the first column of an orthogonal matrix $\mathbb{X}$ means right-multiplying $\mathbb{X}$ by $\mathbb{I}_1$. Notice that $SO(n)\subset O(n)$ and $O(n)$ is the disjoint union

$$O(n) = SO(n) \,\cup\, SO(n)\,\mathbb{I}_1^{-1}.$$
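A quick numerical illustration of these two facts (a sketch assuming NumPy; the QR decomposition of a Gaussian matrix is used here only to produce *some* orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Produce an orthogonal matrix Q (here via QR of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# I1: the identity matrix with its (1,1) entry negated.
I1 = np.eye(n)
I1[0, 0] = -1.0

# Right-multiplying by I1 negates the first column only...
QI1 = Q @ I1
assert np.allclose(QI1[:, 0], -Q[:, 0])
assert np.allclose(QI1[:, 1:], Q[:, 1:])

# ...and flips the sign of the determinant; I1 is also self-inverse.
assert np.isclose(np.linalg.det(QI1), -np.linalg.det(Q))
assert np.allclose(I1 @ I1, np.eye(n))
```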

Given a probability space $(O(n), \mathfrak{S}, \mathbb{P})$ defined on $O(n)$, the process described in the question defines a map

$$f:O(n)\to SO(n)$$

by setting

$$f(\mathbb{X}) = \mathbb{X}$$

when $\mathbb{X}\in SO(n)$ and

$$f(\mathbb{X}) = \mathbb{X}\,\mathbb{I}_1$$

for $\mathbb{X}\in SO(n)\,\mathbb{I}_1^{-1}$.

The question is concerned with generating random elements of $SO(n)$ by obtaining random elements $\omega\in O(n)$: that is, by "pushing them forward" via $f$ to produce $f_{*}\omega = f(\omega)\in SO(n)$. The pushforward creates a probability space $(SO(n), \mathfrak{S}^\prime, \mathbb{P}^\prime)$ with

$$\mathfrak{S}^\prime = f_{*}\mathfrak{S} = \{f(E)\,|\,E\in\mathfrak{S}\}$$

and

$$\mathbb{P}^\prime(E) = (f_{*}\mathbb{P})(E) = \mathbb{P}(f^{-1}(E)) = \mathbb{P}(E \cup E\,\mathbb{I}_1^{-1})$$

for all $E \in \mathfrak{S}^\prime$.

Assuming right multiplication by $\mathbb{I}_1$ is measure-preserving, and noting that in any event $E \cap E\,\mathbb{I}_1^{-1} = \emptyset$ (because $E\subset SO(n)$ while every matrix in $E\,\mathbb{I}_1^{-1}$ has determinant $-1$), it would follow immediately that for all $E\in\mathfrak{S}^\prime$,

$$\mathbb{P}^\prime(E) = \mathbb{P}(E\cup E\,\mathbb{I}_1^{-1}) = \mathbb{P}(E) + \mathbb{P}(E\,\mathbb{I}_1^{-1}) = 2\,\mathbb{P}(E).$$
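This identity can be checked by simulation. The sketch below (an illustration assuming NumPy, using the standard QR-with-sign-correction recipe for Haar-uniform draws from $O(n)$) takes $E = \{M\in SO(3): M_{11} > 1/2\}$. Under Haar measure on $O(3)$ the first column is uniform on the sphere, so $M_{11}$ is uniform on $[-1,1]$ and, since the sign of the determinant is independent of the first column, $\mathbb{P}(E) = \tfrac12\cdot\tfrac14 = 1/8$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 50_000
I1 = np.eye(n)
I1[0, 0] = -1.0

in_E = 0         # counts Q in E (det +1 and Q[0,0] > 0.5)
in_E_folded = 0  # counts f(Q) in E, i.e. Q in f^{-1}(E) = E u E*I1^{-1}

for _ in range(N):
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))  # sign correction: Haar-uniform on O(3)
    M = Q if np.linalg.det(Q) > 0 else Q @ I1  # the fold f
    if np.linalg.det(Q) > 0 and Q[0, 0] > 0.5:
        in_E += 1
    if M[0, 0] > 0.5:
        in_E_folded += 1

p, p_folded = in_E / N, in_E_folded / N
print(p, p_folded)  # p_folded should be close to 2 * p (roughly 0.125 and 0.25)
```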

In particular, when $\mathbb{P}$ is invariant under right-multiplication in $O(n)$ (which is what "uniform" typically means), the obvious fact that $\mathbb{I}_1$ and its inverse (which happens to equal $\mathbb{I}_1$ itself) are both orthogonal means the foregoing holds, demonstrating that $\mathbb{P}^\prime$ is uniform, too. Thus **it is unnecessary to select a random column for negation.**
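Putting the pieces together, a complete generator might look like the following sketch (not the answer's own code; it assumes NumPy and the QR-based recipe, with diagonal-sign correction, for Haar-uniform draws from $O(n)$, and always negates the *first* column):

```python
import numpy as np

def random_special_orthogonal(n, rng):
    """Draw a Haar-uniform matrix from SO(n) by folding a Haar draw from O(n)."""
    # QR of a Gaussian matrix, with the diagonal-sign correction, yields a
    # Haar-uniform element of O(n).
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))
    # Fold into SO(n): if det = -1, negate a *fixed* column (the first).
    # As argued above, this introduces no bias.
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(42)
M = random_special_orthogonal(5, rng)
assert np.allclose(M @ M.T, np.eye(5))    # orthogonal
assert np.isclose(np.linalg.det(M), 1.0)  # special: det = +1
```

SciPy users can get the same thing directly from `scipy.stats.special_ortho_group`, which likewise returns Haar-uniform $SO(n)$ matrices.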
