I have what I think might be a standard machine learning problem but I can't find a clear solution.
I have lots of vectors of different dimensions. For each pair of vectors I can compute their similarity. I would like to embed these vectors into a Euclidean space of some fixed dimension $d$ in such a way that vectors that are similar under the original measure have small Euclidean distance under the embedding.
This seems to be called similarity metric learning, but the Wikipedia article doesn't go into any detail about ways to tackle it. I also found some code for metric learning in Python, but it seems to tackle a different problem from the one I am interested in.
How can you tackle this similarity learning problem?
These are known as multidimensional scaling (MDS) algorithms. From Wikipedia (https://en.wikipedia.org/wiki/Multidimensional_scaling): "An MDS algorithm aims to place each object in N-dimensional space such that the between-object distances are preserved as well as possible." Essentially, you input a distance matrix and the algorithm outputs a Euclidean representation that approximates those distances. In your case you have similarity scores rather than distances, so you'll first need to convert them, e.g. by taking the reciprocal (distance = 1 / similarity, assuming similarities are strictly positive) or by subtracting from a constant (distance = c - similarity, where c is at least the largest similarity so that distances stay nonnegative).
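As a minimal sketch of the idea, here is classical (Torgerson) MDS implemented with NumPy: convert the similarity matrix to distances, double-center the squared distances, and take the top-$d$ eigenvectors. The toy similarity matrix is made up for illustration; in practice you would plug in your own, or use a ready-made implementation such as `sklearn.manifold.MDS` with `dissimilarity='precomputed'`.

```python
import numpy as np

def classical_mds(dist, d=2):
    """Classical (Torgerson) MDS: embed an n x n distance matrix into d dimensions."""
    n = dist.shape[0]
    # Double-center the squared distance matrix: B = -1/2 * J D^2 J
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    # Eigendecompose and keep the d largest (nonnegative) components
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]
    vals, vecs = np.clip(vals[idx], 0, None), vecs[:, idx]
    return vecs * np.sqrt(vals)  # coordinates, one row per object

# Hypothetical similarity matrix for four objects
sim = np.array([[1.0, 0.9, 0.2, 0.1],
                [0.9, 1.0, 0.3, 0.2],
                [0.2, 0.3, 1.0, 0.8],
                [0.1, 0.2, 0.8, 1.0]])
dist = 1.0 - sim          # one possible similarity-to-distance conversion
X = classical_mds(dist, d=2)
# Similar objects (0 and 1) should land closer together than dissimilar ones (0 and 2)
```

Note that classical MDS preserves squared distances in a least-squares sense; if you care more about preserving small distances (local neighborhoods), a non-metric MDS or a neighbor-embedding method may suit better.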