by Peter High, published on Forbes
6-20-2016
Artificial intelligence (AI) is a white-hot topic today, as judged by the amount of capital being put behind it, the number of smart people choosing it as an area of emphasis, and the number of leading technology companies making AI the central nervous system of their strategic plans. Witness the plan of Google’s CEO to put AI “everywhere.”
By some estimates, five percent of all AI talent in the private sector is currently employed by Google. Perhaps no one among that rich talent pool has as deep a set of perspectives as Geoff Hinton. He has been involved in AI research since the early 1970s, before the field was really defined. He also did so before talent, capital, bandwidth, and unstructured data in need of structuring converged to put AI at the center of the innovation roadmap in Silicon Valley and beyond.
A British-born academic, Hinton is considered a pioneer in the branch of machine learning referred to as deep learning. As he mentions in my extended interview with him, we are on the cusp of some transformative innovation in the field of AI, and as someone who splits his time between Google and his post at the University of Toronto, he personifies the value at the intersection of the research and theory of AI and its practice.
(To listen to an unabridged audio version of this interview, please click this link. This is the eighth interview in my artificial intelligence series. Please visit these links to interviews with Mike Rhodin of IBM Watson, Sebastian Thrun of Udacity, Scott Phoenix of Vicarious, Antoine Blondeau of Sentient Technologies, Greg Brockman of OpenAI, Oren Etzioni of the Allen Institute for Artificial Intelligence, and Neil Jacobstein of Singularity University.)
Peter High: Your bio at the University of Toronto notes that your aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional data sets, and to show that this is how the brain learns to see. I wonder if you can talk a little bit about that and about what you are working on day to day as the Emeritus University Professor at the University of Toronto as well as a Distinguished Researcher at Google today.
Geoffrey Hinton: The brain is clearly very good at taking very high-dimensional data, like the information that comes along the optic nerve, which is like a million signals changing quite fast with time, and making sense of it. It makes a lot of sense of it, in that when we get visual input we typically get the correct interpretation. We do not see an elephant when there is really a dog there. Occasionally in the psychology lab things go wrong, but basically we are very good at figuring out what in the world gave rise to this very high-dimensional input. After we have done a lot of learning, we get it right more or less every time. That is a very impressive ability that computers do not have. We are getting closer. But it is very different from, for example, what goes on in statistics, where you have low-dimensional data and not much training data, and you fit a small model that does not have too many parameters.
The thing that fascinates me about the brain is that it has hugely more parameters than it has training data. So it is very unlike the neural nets that are currently so successful. What is happening at present is that we have neural nets with millions of weights, we train them on millions of training examples, and they do very well; sometimes it is billions of weights and billions of examples. But we typically do not have hugely more parameters than training examples, and that is not true of the brain. The brain has about ten thousand parameters for every second of experience. We do not really have much experience with how systems like that work, or how to make them so good at finding structure in data.
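As a rough check on that last figure, the back-of-the-envelope sketch below works through the parameter-to-data ratios Hinton is contrasting. The specific numbers are order-of-magnitude assumptions of mine, not figures from the interview, but they land in the same ballpark as the "ten thousand parameters per second of experience" he quotes.

# Back-of-the-envelope sketch of the parameter-to-data ratios Hinton contrasts.
# All numbers are rough, illustrative assumptions, not values from the interview.

net_weights = 100e6     # a large 2016-era neural net: roughly 1e8 weights (assumed)
net_examples = 1e6      # trained on roughly 1e6 labelled examples (assumed)
print("Neural net: ~%d parameters per training example" % (net_weights / net_examples))

brain_synapses = 1e14   # ~100 trillion synapses, i.e. learnable parameters (rough estimate)
waking_seconds = 1e9    # a few decades of waking experience, in seconds (rough estimate)
print("Brain: ~%d parameters per second of experience" % (brain_synapses / waking_seconds))
# With these estimates the brain's ratio comes out in the tens to hundreds of thousands,
# the same order of magnitude as the figure Hinton cites, while the neural net's ratio
# is only in the tens to hundreds.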
High: Where would you say we are on the continuum of developing true artificial intelligence?
To read the full article, please visit Forbes