For the last several years, my team and I have been working to see if it’s possible to teach robots to perform useful tasks in the messy, unstructured spaces of our everyday lives. We imagine a world where robots work alongside us, making everyday tasks — like sorting trash, wiping tables in cafes, or tidying chairs in meeting rooms — easier. In a more distant future, we imagine our robots helping us in a myriad of ways, like enabling older people to maintain their independence for longer. We believe that robots have the potential to have a profoundly positive impact on society and can play a role in enabling us to live healthier and more sustainable lives. While our imagined world is still a long way off, results from our recent experiments suggest that we may just be on track to one day make this future a reality.
I previously shared progress from an experiment where we used reinforcement learning and simulation to teach robots how to sort waste, reducing the amount of recyclable and compostable material sent unnecessarily to landfill. After showing that the robots could improve through practice, we set ourselves the challenge of taking what they learned performing one task and applying it to different tasks, without rebuilding the robot or writing lots of code from scratch.
Today, I’m pleased to share that we have early signs that this is possible. We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and the same gripper that grasps cups can learn to open doors.
Our prototype autonomously wipes down tables after lunch
Now that we’ve seen signs that creating a general-purpose learning robot is possible, we’ll be moving out of the rapid prototyping environment of X to focus on expanding our pilots to some of Google’s Bay Area campuses. We’ll also be dropping the “project” from our name and will now be known as Everyday Robots.
Back in the 1980s, roboticist and AI luminary Hans Moravec observed that while it’s easy to train computers to do things that humans find hard, like playing chess or doing advanced mathematics, training them to do things humans find easy, like walking or recognising and interacting with the objects around them, is incredibly challenging. Often summarised as “the hard things are easy and the easy things are hard”, this adage remains true decades later. Recent breakthroughs in machine learning, however, are slowly helping to change this.
Today, most robots still operate in environments specifically designed, structured and even illuminated for them. The tasks they complete are very specific, and the robots are painstakingly coded to perform them in exactly the right way, at exactly the right time. Yet this approach simply won’t work in the messy, complex spaces of our everyday lives. Imagine trying to script all the possible ways to pick up a cup of coffee, anticipate every lighting condition, or open a door. It wouldn’t scale. We believe that for robots to be helpful in the unstructured and unpredictable spaces where we live and work, they can’t be programmed: they have to learn.
Over the last few years, we’ve been building an integrated hardware and software system that is designed for learning, including transferring what is learned in the virtual world to the real world. Our robots are equipped with a mix of different cameras and sensors to take in the world around them. Using a combination of machine learning techniques like reinforcement learning, collaborative learning, and learning from demonstration, the robots have steadily gained a better understanding of their surroundings and become more skilled at doing everyday tasks.
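To make the simulation-first pattern concrete, here is a minimal, hypothetical sketch: a policy practices cheaply in a simulator, then is fine-tuned with far fewer episodes in a noisier environment standing in for real hardware. The toy environment, the linear Gaussian policy and the update rule are illustrative stand-ins, not our actual system.

```python
# A hypothetical sketch of the sim-to-real pattern: practice in simulation,
# then fine-tune briefly on noisier "real" hardware. Not the actual stack.
import numpy as np

class ToyReachEnv:
    """A 1-D 'move the gripper to the target' task standing in for a simulator."""
    def __init__(self, noise=0.0, seed=0):
        self.noise = noise                        # real hardware is noisier than sim
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.pos, self.target = self.rng.uniform(-1, 1, size=2)
        return np.array([self.pos, self.target])

    def step(self, action):
        self.pos += 0.1 * float(np.clip(action, -1, 1))
        self.pos += self.noise * self.rng.normal()    # unmodelled disturbance
        reward = -abs(self.pos - self.target)         # closer is better
        return np.array([self.pos, self.target]), reward

def rollout(env, w, steps=20, sigma=0.2):
    """One episode with a linear Gaussian policy; returns a REINFORCE gradient."""
    obs, ret, grad = env.reset(), 0.0, np.zeros_like(w)
    for _ in range(steps):
        mean = obs @ w
        action = mean + sigma * np.random.normal()
        grad += (action - mean) / sigma**2 * obs      # score function of the policy
        obs, reward = env.step(action)
        ret += reward
    return ret * grad                                 # scale by episode return

def train(env, w, episodes, lr=0.001):
    for _ in range(episodes):
        w = w + lr * rollout(env, w)                  # gradient ascent on return
    return w

w = np.zeros(2)
w = train(ToyReachEnv(noise=0.0), w, episodes=3000)   # cheap practice in simulation
w = train(ToyReachEnv(noise=0.05), w, episodes=200)   # brief "real-world" fine-tune
```

The split in the last two lines is the point: thousands of practice episodes happen in the forgiving simulator, and only a couple of hundred in the noisy environment that plays the role of the real robot.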
Our robots practice tasks like table wiping in the simulated world before practicing in the real world, reducing the time needed to learn new tasks
A robot sorts recycling, waste and compost. On screen you can see a visualiser that helps us to understand what the robot is seeing and doing
Back in 2016, before we were using simulation, we used a small lab configuration of industrial robots to learn how to grasp small objects like toys, keys and everyday household items. It took the equivalent of four months for one robot to learn how to perform a simple grasp with a 75% success rate. Today, a single robot learns how to perform a complex task such as opening doors with a 90% success rate, with less than a day of real-world learning. Even more excitingly, we’ve shown that we can build on the algorithms and learnings from door opening and apply them to a new task: straightening up chairs in our cafes. This progress gives us hope that our moonshot for building general-purpose learning robots might just be possible.
Our robot autonomously opens a latched door to a meeting room on Google’s Mountain View campus
We have been able to take the learning from door opening and apply it to learn how to push in chairs
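One way to picture how learning carries over from door opening to chair straightening is ordinary transfer learning: keep what the network has already learned and retrain only a small part for the new task. The sketch below is purely hypothetical; the architecture, shapes and names are invented for illustration, not a description of our robots.

```python
# An illustrative sketch of the transfer step: feature layers learned on one
# task (door opening) are reused for a new task (pushing in chairs), so only
# a small task-specific head needs retraining. All names/shapes are invented.
import numpy as np

rng = np.random.default_rng(0)

class PolicyNet:
    def __init__(self, obs_dim=32, act_dim=7, hidden=64):
        # Backbone that learns general visuomotor features.
        self.W_feat = rng.normal(scale=0.1, size=(obs_dim, hidden))
        # Task-specific output head.
        self.W_head = rng.normal(scale=0.1, size=(hidden, act_dim))

    def act(self, obs):
        features = np.maximum(0, obs @ self.W_feat)   # ReLU features
        return features @ self.W_head

# Stand-in for a policy already trained on door opening.
door_policy = PolicyNet()

# New task: copy the learned backbone and re-initialize only the head,
# then fine-tune on the new task with far less data than training from scratch.
chair_policy = PolicyNet()
chair_policy.W_feat = door_policy.W_feat.copy()       # transferred features
```

Because the backbone already encodes useful structure from the first task, the new task starts from a much better initialization, which is consistent with the drop from months of practice to less than a day described above.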
Over the coming months, Googlers who work in Mountain View may catch glimpses of our prototypes wiping tables after lunch in the cafes, or opening meeting room doors to check whether a room needs to be tidied or is missing chairs. Over time, we’ll expand the types of tasks the robots perform and the buildings where we operate, and we look forward to sharing updates from our journey along the way.
As I’ve shared before, building cool robot technology is not an end in itself. We hope to create robots that are as useful in our physical lives as computers have been in our digital lives, and we believe that robots hold enormous potential as tools that will help us find new solutions to some of the biggest challenges facing the world, from finding new ways to live more sustainably to caring for loved ones. But that is still far ahead of us. For now, we’re focused on teaching the robots new tasks and making sure they don’t get stuck in the corridor on their way to help us out.