I've read that the **Mahalanobis distance** is as effective as the **Euclidean distance** when **comparing** two **projected feature vectors** in classification using an **LDA classifier**.

I was wondering whether this statement is true.

It would be nice if someone could comment on this.

I'm working on **projecting** a **36-dimensional feature vector** to a **1-dimensional feature vector** using a **two-class LDA classifier** and comparing the projected feature vectors.
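For concreteness, here is a minimal sketch of that setup using Fisher's two-class LDA in plain NumPy. The data is synthetic and stands in for the real 36-dimensional features; the formula `w = Sw^-1 (m1 - m0)` is the standard two-class Fisher discriminant direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real data: two classes of 36-d feature vectors
# with shifted means (assumption: your real features replace these).
X0 = rng.normal(0.0, 1.0, (50, 36))
X1 = rng.normal(1.0, 1.0, (50, 36))

# Fisher's two-class LDA direction: w = Sw^-1 (m1 - m0),
# where Sw is the pooled within-class scatter matrix.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
w = np.linalg.solve(Sw, m1 - m0)

# Projecting each 36-d vector onto w gives a single scalar per sample:
# the 36-d -> 1-d projection described above.
z0, z1 = X0 @ w, X1 @ w
print(z0.shape, z1.shape)  # (50,) (50,)
```

With two classes, LDA yields at most one discriminant direction, which is why the projection is necessarily one-dimensional.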


#### Best Answer

If the covariance matrix S = I, the identity matrix, the Mahalanobis distance reduces to the ordinary Euclidean distance. More generally, if S is diagonal, it reduces to the normalised (per-dimension standardised) Euclidean distance, which is invariant to the scale of each feature.
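Both special cases are easy to verify numerically. This is a minimal NumPy sketch with made-up vectors, computing the Mahalanobis distance directly from its definition, sqrt(d^T S^-1 d):

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.normal(size=5), rng.normal(size=5)
diff = u - v

# Case 1: S = I  =>  S^-1 = I, and Mahalanobis equals Euclidean.
S_inv = np.eye(5)
d_mahal = np.sqrt(diff @ S_inv @ diff)
d_eucl = np.linalg.norm(diff)
print(np.isclose(d_mahal, d_eucl))  # True

# Case 2: diagonal S  =>  Mahalanobis equals the normalised
# (per-dimension standardised) Euclidean distance.
sd = np.array([1.0, 2.0, 0.5, 3.0, 1.5])   # assumed per-feature std devs
S_inv_diag = np.diag(1.0 / sd**2)
d_norm = np.sqrt(diff @ S_inv_diag @ diff)
print(np.isclose(d_norm, np.linalg.norm(diff / sd)))  # True
```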