Scientists have developed an online platform where robots worldwide can learn new skills from each other, a kind of "Wikipedia for robots." The objective is to develop robots that are better at caring for the elderly and helping with household tasks.
2024-01-22: Large-scale collaboration to share training data
The scale of this project is very large because it has to be. The RT-X dataset currently contains 1 million robotic trials for 22 types of robots, including many of the most commonly used robotic arms on the market. The robots in this dataset perform a huge range of behaviors, including picking and placing objects, assembly, and specialized tasks like cable routing. It covers roughly 500 different skills and interactions with thousands of different objects, making it the largest open-source dataset of real robotic actions in existence.
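To make the pooling idea concrete, here is a minimal Python sketch of what a shared cross-embodiment episode schema could look like. The field names, robot labels, and helper function are illustrative assumptions, not the released RT-X format; the point is only that trials from very different robots become interchangeable once they share one schema.

```python
# Illustrative sketch: trials from very different robots can be pooled for
# training once they are expressed in one common episode format.
# These field names are assumptions for illustration, not the RT-X schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    image: bytes          # camera observation (encoded frame)
    proprio: List[float]  # joint angles / gripper state for this embodiment
    action: List[float]   # commanded action at this step

@dataclass
class Episode:
    robot_type: str       # e.g. "franka", "xarm", "widowx" (22 types in the dataset)
    instruction: str      # natural-language task, e.g. "pick up the sponge"
    steps: List[Step]

def pool_for_training(collections: List[List[Episode]]) -> List[Episode]:
    """Merge per-lab episode collections into one cross-embodiment training set."""
    return [episode for collection in collections for episode in collection]
```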
To test the capabilities of our model, 5 of the laboratories involved in the RT-X collaboration each tested it in a head-to-head comparison against the best control system they had developed independently for their own robot. Each lab’s test involved the tasks it was using for its own research, which included things like picking up and moving objects, opening doors, and routing cables through clips. Remarkably, the single unified model provided improved performance over each laboratory’s own best method, succeeding at the tasks 50% more often on average.
Our early results hint at how large cross-embodiment robotics models could transform the field. Much as large language models have mastered a wide range of language-based tasks, in the future we might use the same foundation model as the basis for many real-world robotic tasks. Perhaps new robotic skills could be enabled by fine-tuning or even prompting a pretrained foundation model. In a similar way to how you can prompt ChatGPT to tell a story without first training it on that particular story, you could ask a robot to write “Happy Birthday” on a cake without having to tell it how to use a piping bag or what handwritten text looks like. Of course, much more research is needed for these models to take on that kind of general capability, as our experiments have focused on single arms with 2-finger grippers doing simple manipulation tasks.
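As a rough illustration of what "fine-tuning a pretrained foundation model" for a new robot could look like, here is a hedged PyTorch sketch: a frozen stand-in trunk plus a small action head trained by behavior cloning on a handful of demonstrations. The architecture, dimensions, and data below are placeholders, not the actual RT-X models or training recipe.

```python
# Minimal sketch of the "fine-tune a pretrained foundation policy" idea.
# Model, dimensions, and data are illustrative stand-ins, not RT-X itself.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 512, 7  # assumed: a pooled visual embedding and a 7-DoF arm action

# Stand-in for a large pretrained cross-embodiment trunk
# (in practice its weights would be loaded, not randomly initialized).
pretrained_trunk = nn.Sequential(
    nn.Linear(OBS_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
)
for p in pretrained_trunk.parameters():
    p.requires_grad = False  # freeze the shared representation

# Small action head adapted to the new robot's action space.
new_robot_head = nn.Linear(1024, ACT_DIM)

# A handful of demonstrations on the new robot (random tensors as placeholders).
demo_obs = torch.randn(64, OBS_DIM)
demo_act = torch.randn(64, ACT_DIM)

opt = torch.optim.Adam(new_robot_head.parameters(), lr=1e-4)
for epoch in range(100):
    pred = new_robot_head(pretrained_trunk(demo_obs))
    loss = nn.functional.mse_loss(pred, demo_act)  # behavior cloning on the demos
    opt.zero_grad()
    loss.backward()
    opt.step()
```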
The friendly robot that carries your stuff. We live in a time when you can't be sure whether a particular Kickstarter is trolling you or whether you're dealing with the real thing.
“We expect the robots will demonstrate the competence of a 2-year-old child, giving them the ability to autonomously carry out simple commands such as ‘Clear the debris in front of you’ or ‘Close the valve.’ The robots will still need to be told by human operators which tasks to chain together to achieve larger goals, but DARPA’s hope is that this demonstration will show the promise disaster response robots hold for mitigating the effects of future disasters.”
Now with more cuddly rescue capability instead of ominous "military applications."
Humans prefer working with robots that sometimes make mistakes. Robots' gazes must also be carefully programmed, lest a stare make someone uncomfortable.
Pfc. Marcus Beedle looks over his shoulder at the robot following him. The machine's four legs are eagerly stamping the grass, its sensor-laden head held high. "LS3, follow tight," Beedle says to the robot, and the Legged Squad Support System, which stands taller than a dog but smaller than a mule, follows in the exact footsteps of its Marine Corps handler. Beedle's backpack is outfitted with thick black bands; to follow him, the robot senses this pattern via the flickering laser in its head. LS3 also uses stereoscopic cameras to fix on the Marine's location and can trace the path he's taken by following a navigation device strapped to Beedle's right shoe. As the young private first class strides forward, the LS3 obediently trots after him, exhaust from its gas engine sputtering. "Follow-the-leader is our bread and butter."
2013-12-14: This is a far more interesting take than all the Terminator jokes, which are neither insightful, original, nor clever.
News broke today that Google acquired my friend Marc Raibert's company, Boston Dynamics, one of the coolest robotics companies in the world. You know Boston Dynamics from their work building "Big Dog," "Bigger Dog," Cheetah, and now Atlas. They have built the most impressive functional robots around.
What's bigger news is that this is Google's eighth announced robotics acquisition in the last six months. Remember that Google is spending over $7B every year on R&D and M&A.
This internal robotics revolution is being led by Andy Rubin, the Google executive who developed and ran Android, the world’s most widely used smartphone software. This is being done with Larry Page’s enthusiastic support, as he and his team continue to display their impressive “moonshot thinking” by investing heavily in the future.
Don't forget that Google is probably the No. 1 hotbed of research on artificial intelligence, with the addition of my friend and SU co-founder Ray Kurzweil and the recent hiring of deep-learning pioneer Geoffrey Hinton.
So what do you get when you combine eight robotics companies, the leading AI researchers and creative forces, the brilliance (and ambition) of Larry Page, and a $7B R&D budget?
I think it will be the transformation of our society — how we work, how we learn, take care of our sick, conduct our commerce, explore, handle disasters, fight wars… everything.
If this level of transformation isn’t on your radar, if you are not thinking about how this will change your life, your business and your industry, then you are missing it, big time.
You need to understand the implications of this, and figure out how you are going to surf on top of this tsunami… not be crushed by it.
If you like, on January 9th and 10th, 2014, at the Ritz-Carlton hotel (Marina del Rey, Los Angeles), I'll be teaching entrepreneurs about robotics, AI, and other exponential technologies: how you can think boldly and take action globally, and how to leverage such powerful tools as crowdfunding and incentive competitions.
And here are some Boston Dynamics videos, so you can see the robots in action:
Big Dog:
Cheetah:
Atlas:
2016-03-18: Boston Dynamics are an evolutionary dead end
Boston Dynamics are very successful pioneers, but their algorithms are not based on deep-learning principles. Google, meanwhile, leads the world in deep learning and can apply it to anything it wants, including robotics. DL-based algorithms do not provide a complete robotics solution today, but there is wide agreement that this is the best path forward for the field. Why is this important? It's the difference between robots that can walk and robots that can dance ballet. The goal is "graceful locomotion," which will be an order of magnitude more adaptive, more energy-efficient, and faster than the current generation of robots.
Kinema’s software—which is robot-agnostic, meaning it already works on a range of robots beyond Handle—helps the machine through all these challenges. “Their system is able to look at a stack of boxes, and no matter how ordered or disordered the boxes are, or the markings on top, or the lighting conditions, they’re able to figure out which boxes are discrete from each other and to plan a path for grabbing the box.” That’s a huge part of what Handle, a robot designed to work in warehouses, needs to do.
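To show the shape of the segment-then-plan loop described above, here is a generic Python sketch. This is not Kinema's actual system: the perception stage is represented only by its output, and the "planner" is a toy pick-the-highest-box rule with a straight-line approach, purely to illustrate the two stages.

```python
# Generic sketch of a segment-then-plan pick pipeline (not Kinema's system).
# Detections would come from a perception stage that separates discrete boxes
# regardless of order, markings, or lighting; here they are hard-coded.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BoxDetection:
    center: Tuple[float, float, float]  # (x, y, z) of the box's top face, meters
    size: Tuple[float, float, float]    # width, depth, height, meters

def plan_pick(boxes: List[BoxDetection]) -> List[Tuple[float, float, float]]:
    """Planning stage: grab the highest box first (least likely to be
    obstructed) and return a simple approach path to its top face."""
    target = max(boxes, key=lambda b: b.center[2])
    x, y, z = target.center
    approach_height = z + 0.20  # hover 20 cm above before descending
    return [(x, y, approach_height), (x, y, z)]

if __name__ == "__main__":
    stack = [
        BoxDetection(center=(0.4, 0.1, 0.55), size=(0.3, 0.2, 0.25)),
        BoxDetection(center=(0.4, 0.1, 0.80), size=(0.3, 0.2, 0.25)),  # on top
    ]
    print(plan_pick(stack))  # approach waypoint, then grasp point on the top box
```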
2023-05-16: Some much-needed competition. Boston Dynamics is moving at a snail's pace.