Solved – How to differentiate autoencoder techniques from self-supervised learning

Autoencoders (AE) learn a compressed representation of raw data by trying to reconstruct the input from a hidden representation. Self-supervised learning (SSL) algorithms, on the other hand, learn from a set of auxiliary (pretext) tasks that expose the inner structure of the data. But one can argue that reconstructing the input is also an auxiliary task. So how do we differentiate SSL from AE techniques?

Yes, both approaches can be seen as doing the same thing, in that both learn a representation of the input. They differ in how that learning is performed. You can think of the representation-learning part of self-supervised learning (SSL) as an encoding step; in addition to an encoder, an autoencoder also has a decoder that reconstructs the input, and its training objective is the reconstruction error.
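As a minimal sketch of the encoder-plus-decoder idea, here is a linear autoencoder trained on toy data by gradient descent on the reconstruction MSE. The data, dimensions, and learning rate are illustrative assumptions, not anything from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples with 8 features (dimensions are illustrative)
X = rng.normal(size=(100, 8))

d_in, d_hidden = 8, 3
W_enc = rng.normal(scale=0.1, size=(d_in, d_hidden))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_in))  # decoder weights

lr = 0.01
losses = []
for _ in range(200):
    Z = X @ W_enc               # encode: compressed representation
    X_hat = Z @ W_dec           # decode: reconstruct the input
    err = X_hat - X
    losses.append(np.mean(err ** 2))  # reconstruction MSE

    # Manual gradients of the MSE for this linear autoencoder
    g = 2.0 * err / X.size
    g_dec = Z.T @ g
    g_enc = X.T @ (g @ W_dec.T)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
```

The target here is the input itself: the decoder exists only so the reconstruction loss can be computed, and after training it is the encoder's output `Z` that serves as the learned representation.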

VAEs, the most popular autoencoder variant, assume the representations are distributed according to a prior (e.g., a Gaussian) and perform (approximate) likelihood maximization. The loss you minimize there is different from the usual supervised loss used in SSL, where training looks exactly like supervised learning but the labels come from self-supervised signals (e.g., predicting which rotation was applied to an image).
