Notes on The Deep Learning Revolution
Table of Contents
Intelligence Reimagined (Where did this come from?)
Timeline
- 1956: Dartmouth Summer Research Project on Artificial Intelligence - start of the field of Artificial Intelligence.
- 1962: Frank Rosenblatt publishes description of the Perceptron
- 1962: David Hubel and Torsten Wiesel report recordings of responses from single neurons in the visual cortex
- 1969: Marvin Minsky and Seymour Papert point out the limits of the perceptron, triggering the AI Winter
- 1979: Geoffrey Hinton and James Anderson organize the Parallel Models of Associative Memory workshop to gather researchers working on neural networks
- 1987: First Neural Information Processing Systems (NIPS) conference held, bringing machine learning researchers together
The Rise of Machine Learning
What is deep learning?
Deep Learning is a form of machine learning that uses data to train artificial neural networks to perform tasks. When the field of artificial intelligence began in the 1950s there were two camps: one believed the path to intelligence lay in formal logic and hand-written computer programs, the other believed intelligence would come from learning directly from data. Deep Learning belongs to the second camp, and although it has been around for a long time, only once we had enough computational power and data was it able to compete.
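To make the "learning directly from data" idea concrete, here is a minimal sketch (my own illustration, not from the book) of Rosenblatt-style perceptron learning on a made-up toy dataset; the data, learning rate, and epoch count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 2-D points labelled 1 when x + y > 1, else 0.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(int)

w = np.zeros(2)   # weights, learned from the data
b = 0.0           # bias term

# Classic perceptron learning rule: nudge the weights toward each
# misclassified example, leave them alone when the prediction is right.
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        error = target - pred          # -1, 0, or +1
        w += 0.1 * error * xi
        b += 0.1 * error

accuracy = np.mean(((X @ w + b) > 0).astype(int) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Nothing here is programmed about the rule "x + y > 1" itself; the weights come entirely from the labelled examples, which is the essential contrast with the logic-and-programs camp.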
How did self-driving cars come about?
- In 2005 a group from Stanford led by Sebastian Thrun won the DARPA Grand Challenge. This was the second DARPA challenge and the first in which any vehicles (five in total) were able to finish.
- Some see self-driving cars as a way to remake society:
  - no need to own a car; use a just-in-time service
  - no need for parking lots or for so many lanes on the road
  - travel time can be productive
- Once one car learns something it can be taught to all the other cars, so 'rare' events will be handled even if it is the first time a particular car encounters them.
How do machines translate languages?
Originally machine translation worked using a statistical approach, looking for familiar word combinations and their counts. Now networks are able to keep longer sections of text in memory, which improves the translation because each word is seen with more context. The hope is that when they can be expanded to learn whole paragraphs, or an author's entire body of work, they will pick up more of the subtleties and the poetry of the text.
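As a crude sketch of that statistical approach (my own toy example, not anything from the book), the following counts word pairs over a hypothetical three-sentence parallel corpus; real systems use phrase tables and learned alignments rather than the one-to-one alignment assumed here.

```python
from collections import Counter

# Hypothetical tiny parallel corpus (English / French), assumed to be
# word-aligned one-to-one for simplicity; real systems learn alignments.
pairs = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
]

counts = Counter()
for en, fr in pairs:
    for e, f in zip(en.split(), fr.split()):
        counts[(e, f)] += 1

# For each English word keep the French word it co-occurs with most often;
# processing pairs in ascending count order lets the highest count win.
best = {}
for (e, f), c in sorted(counts.items(), key=lambda kv: kv[1]):
    best[e] = f

print(best)  # e.g. {'dog': 'chien', 'a': 'un', 'the': 'le', 'cat': 'chat'}
```

Counts like these only capture local word-for-word regularities, which is why holding longer stretches of text in context gives neural translation its advantage.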
What's the big deal about speech recognition?
Some people think that the next interface to our machines will be the human voice. There have already been demonstrations of live translation performed by combining computer speech recognition with machine translation.
How good is machine learning at playing poker?
DeepStack played poker against professional poker players and beat all of them. This is important because the nature of the game means that every player is working with imperfect information (the unseen cards in the deck and in the other players' hands). This suggests that machine learning could be used in other settings where you don't have all the information, such as politics and negotiations.
Does artificial intelligence pose a threat to humanity?
If you look at the areas where deep learning managed to outdo human competitors (e.g., AlphaGo), what eventually happened was that the human players were able to learn moves from the artificial intelligence that they would likely not have come up with themselves. This points the way to the immediate future of artificial intelligence. Although AI can sometimes outperform humans, the more open-ended the problem, the more likely it is that humans and machines can complement each other, with the machine producing outcomes we could never have thought of and the humans contributing the expertise needed to solve human problems. AI is, so far, more a complement to human intelligence than a replacement for it.