Solved – How true is this slide on deep learning claiming that all improvements since the 1980s are due only to much more data and much faster computers?

I was listening to a talk and saw this slide:

[Slide image: the claim that all deep-learning improvements since the 1980s are due only to much more data and much faster computers]

How true is it?

I was browsing the AI StackExchange and ran across a very similar question: What distinguishes “Deep Learning” from other neural networks?

Since the AI StackExchange will close tomorrow (again), I'll copy the two top answers here (user contributions licensed under CC BY-SA 3.0 with attribution required):


Author: mommi84less

Two well-cited 2006 papers brought the research interest back to deep learning. In "A fast learning algorithm for deep belief nets", the authors define a deep belief net as:

[…] densely-connected belief nets that have many hidden layers.

We find almost the same description for deep networks in "Greedy Layer-Wise Training of Deep Networks":

Deep multi-layer neural networks have many levels of non-linearities […]

Then, in the survey paper "Representation Learning: A Review and New Perspectives", deep learning is used as an umbrella term for all such techniques (see also this talk) and is defined as:

[…] constructing multiple levels of representation or learning a hierarchy of features.

The adjective "deep" was thus used by the authors above to highlight the use of multiple non-linear hidden layers.
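
To make the phrase concrete, here is a minimal sketch in plain NumPy. The layer widths and the ReLU non-linearity are my own illustrative choices, not taken from the papers above: a "deep" network is simply one with several non-linear hidden layers stacked between input and output.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical layer widths: a 784-dim input (e.g. a flattened
# 28x28 image), three non-linear hidden layers, a 10-dim output.
sizes = [784, 256, 128, 64, 10]
weights = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # Each hidden layer is an affine map followed by a
    # non-linearity; "depth" counts these stacked stages.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # linear output layer

logits = forward(rng.normal(size=(1, 784)))
print(logits.shape)  # (1, 10)
```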


Author: lejlot

Just to add to @mommi84's answer.

Deep learning is not limited to neural networks. It is a broader concept than just Hinton's DBNs, etc. Deep learning is about

constructing multiple levels of representation or learning a hierarchy of features.

So it is a name for hierarchical representation learning algorithms. There are deep models based on Hidden Markov Models, Conditional Random Fields, Support Vector Machines, etc. The only thing they have in common is that, instead of the feature engineering popular in the '90s, where researchers tried to craft the set of features best suited to a given classification problem, these machines work out their own representation from raw data. In particular, applied to image recognition (raw images), they produce a multi-level representation consisting of pixels, then lines, then face features (if we are working with faces) such as noses and eyes, and finally generalized faces. Applied to natural language processing, they construct a language model that combines words into chunks, chunks into sentences, and so on.
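
As an illustration of lejlot's point, below is a minimal sketch of greedy layer-wise pretraining in the spirit of the 2006 papers cited above. All specifics (sigmoid units, squared-error loss, plain gradient descent, the layer sizes, and the random stand-in data) are my own simplifying assumptions. Each layer is trained as an autoencoder on the representation produced by the layers below it, so the stack learns a hierarchy of features from raw data instead of hand-engineered ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(data, n_hidden, lr=0.1, epochs=50):
    """Fit one layer to reconstruct `data`; return its encoder weights."""
    n_in = data.shape[1]
    w_enc = rng.normal(0.0, 0.1, (n_in, n_hidden))
    w_dec = rng.normal(0.0, 0.1, (n_hidden, n_in))
    for _ in range(epochs):
        h = sigmoid(data @ w_enc)        # encode
        recon = sigmoid(h @ w_dec)       # decode
        d_out = (recon - data) * recon * (1.0 - recon)  # squared-error grad
        grad_dec = h.T @ d_out
        d_hidden = (d_out @ w_dec.T) * h * (1.0 - h)
        grad_enc = data.T @ d_hidden
        w_dec -= lr * grad_dec / len(data)
        w_enc -= lr * grad_enc / len(data)
    return w_enc

# Stack the layers greedily: raw input -> level-1 features -> level-2 ...
x = rng.random((200, 64))                # stand-in for raw data
encoders, layer_input = [], x
for n_hidden in (32, 16):
    w = train_autoencoder(layer_input, n_hidden)
    encoders.append(w)
    layer_input = sigmoid(layer_input @ w)   # feed features upward

print([w.shape for w in encoders])       # [(64, 32), (32, 16)]
```

In the full recipe from those papers, the pretrained stack is then fine-tuned end-to-end on the supervised task; this sketch stops at the unsupervised stage.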


