Artificial-intelligence researchers are focusing on a method called deep learning, which gets computers to recognize patterns in data on their own (see “Teaching Machines to Understand Us”). One person who demonstrated its potential is Ilya Sutskever, who trained under a deep-learning pioneer at the University of Toronto and used the technique to win an image-recognition challenge in 2012. He is now a key member of the Google Brain research team. I asked him why deep learning could mimic human vision and solve many other challenges.
“When you look at something, you know what it is in a fraction of a second,” he says. “And yet our neurons operate extremely slowly. That means your brain can only be performing a modest number of sequential steps, each of them a massively parallel computation. An artificial neural network is exactly that: a sequence of simple, massively parallel computations.
“We started a company to keep applying this approach to different problems and to expand its range of capabilities. Soon afterward, we joined Google. I’ve shown that the same philosophy that worked for image recognition can also achieve really good results for translation between languages. It should beat existing translation technology by a good margin. I think you will see deep learning make a lot of progress in many areas. It doesn’t make assumptions about the nature of the problem, so it is applicable to many things.”
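To make the point about parallelism concrete, here is a minimal sketch of the idea (an editorial illustration, not Sutskever’s actual model; the NumPy implementation and layer sizes are assumptions chosen for clarity): a small feedforward network that reaches its answer in only a few sequential steps, each of which is a single massively parallel matrix computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple elementwise nonlinearity applied between layers.
    return np.maximum(x, 0.0)

# Illustrative layer sizes only: a flattened 28x28 image in, 10 class scores out.
layer_sizes = [784, 512, 512, 10]
weights = [rng.normal(0.0, 0.01, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    # Each loop iteration is one of the few sequential steps; within a step,
    # every output unit is computed at once by the matrix product.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # raw class scores, no softmax

scores = forward(rng.normal(size=784), weights)
print(scores.shape)  # (10,)
```

The whole computation here is three matrix multiplications in a row, which mirrors the argument in the quote: depth (the short sequence of steps) is cheap in time, while the heavy lifting inside each step is done in parallel.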
—Tom Simonite