In the early days of computing, humans had to laboriously tell a machine exactly what to do and how to do it.
The need to give a computer a very specific set of instructions inherently limited what computers could do. Today, computers are much more capable thanks in part to artificial intelligence (AI). AI systems “learn” what to do by sorting through lots of previous examples, and they’re really good, and really fast, at finding fresh patterns or new insights in large amounts of data.
When the Google Brain team began their work in 2011, AI and machine learning had been making steady but slow progress for many years. The team set out to see if they could move this promising new technology out of the lab and into everyday products much faster.
Brain’s work was central to catapulting “deep learning” from the academic arena to commercial prime time.
Neural networks and cat recognition
Artificial neural networks are essentially networks of simple computing units, loosely modeled on how the human brain processes information. Inspired by promising academic papers, the Brain team believed it was possible to design and train a network whose learning would be so accurate and so general that it would open up new horizons for the real-world applications of machine learning.
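To make the idea a little more concrete, here is a minimal sketch in Python of what a neural network is at its simplest: a couple of layers of simple units that adjust their connection weights by learning from examples. It is purely illustrative and bears no resemblance to the scale or sophistication of the networks the Brain team built.

```python
# A minimal sketch of an artificial neural network in plain NumPy: layers of
# simple units that pass weighted signals forward and adjust those weights by
# learning from examples. The task (XOR) and the scale are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Four labeled examples of the XOR function, which needs a hidden layer to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units feeding a single output unit.
W1, b1 = rng.normal(scale=1.0, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=1.0, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer applies its weights, then a simple nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (gradient descent on cross-entropy loss): nudge every
    # weight in the direction that shrinks the prediction error.
    grad_out = out - y
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ grad_out
    b2 -= 0.1 * grad_out.sum(axis=0)
    W1 -= 0.1 * X.T @ grad_h
    b1 -= 0.1 * grad_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```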
The team created one of the largest-ever neural networks for machine learning by connecting 16,000 computer processors, and they were eager to see what a network of this size and scale could do. To test its capabilities, the team fed the network random thumbnails of cats extracted from 10 million YouTube videos, without telling the machines in advance what a cat looks like. The results were far better than any previous machine learning effort: the simulation taught itself to recognize cats! But that wasn’t all. Brain then generated a digital image of a cat by assembling the features that, according to the millions of images it had been fed, make up a cat.
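The key ingredient of the experiment was learning from unlabeled data. The toy sketch below shows the same idea in miniature, using a made-up dataset and a tiny autoencoder rather than anything resembling the real 16,000-processor system: the model is never told what the underlying patterns are, yet it discovers them on its own by learning to compress and reconstruct its inputs.

```python
# A toy sketch of unsupervised feature learning: a linear autoencoder learns to
# compress and reconstruct unlabeled inputs, and in doing so discovers the
# recurring patterns hidden in them. The data is synthetic and the model tiny;
# the real experiment used a vastly larger network and real YouTube thumbnails.
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled 8-"pixel" patches built from two hidden prototypes plus noise.
# The model is never told which prototype produced which patch.
prototypes = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                       [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)
data = prototypes[rng.integers(0, 2, size=500)]
data += rng.normal(scale=0.1, size=data.shape)

# Autoencoder: squeeze each 8-pixel patch down to 2 learned features,
# then try to reconstruct the original patch from those features alone.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr = 0.01
for _ in range(2000):
    code = data @ W_enc           # encode: patches -> 2 learned features
    recon = code @ W_dec          # decode: features -> reconstructed patches
    err = recon - data            # reconstruction error to be minimized

    # Gradient descent on the mean squared reconstruction error.
    grad_dec = code.T @ err / len(data)
    grad_enc = data.T @ (err @ W_dec.T) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# The learned features come to capture the recurring structure in the data,
# loosely analogous to a unit in the large network responding to cat-like images.
print(np.round(W_enc.T, 2))
```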
The project successfully demonstrated that software-based neural networks mirrored a key theory of how the human brain works: individual neurons learn to detect significant objects. More importantly, it proved that machine learning algorithms, if fed immense amounts of data, could improve the usefulness and capability of products we use every day. And indeed, in the years since this foundational work, an AI-driven evolution in computing has made it possible to embed more intelligence into once-“dumb” machines and unlock new and exciting possibilities, like cars that can drive themselves.
Breaking new ground in artificial intelligence
After having the space and time to develop and prove out the Google Brain technology at X, the Brain team decided they were ready to apply their insights across a range of products at Google, so they graduated from X back to Google in late 2012 as part of Google AI. Brain’s technology currently powers products as wide-ranging as Google Translate, Android’s speech recognition system, search in Google Photos, video recommendations on YouTube, and more. Brain also continues to break new ground in AI with experiments in fields like healthcare, cryptography, and robotics.