On a cool morning this summer, I visited a former shopping mall in Mountain View, California, that is now a Google office building. On my way inside, I passed a small museum of the company’s past “moonshots,” including Waymo’s first self-driving cars. Upstairs, Jonathan Tompson and Danny Driess, research scientists in Google DeepMind’s robotics division, stood in the center of what looked like a factory floor, with wires everywhere.
At a couple of dozen stations, operators leaned over tabletops, engaged in various kinds of handicraft. They were not using their own hands; instead, they were puppeteering pairs of metallic robotic arms. The setup, known as ALOHA, “a low-cost open-source hardware system for bimanual teleoperation,” began as the Ph.D. project of Tony Zhao, at Stanford. At the end of each arm was a claw that rotated on a wrist joint; it moved like the head of a velociraptor, with a slightly stiff grace. One woman was using her robotic arms to carefully lower a necklace into the open drawer of a jewelry case. Behind her, another woman prized apart the seal on a ziplock bag, and nearby a young man swooped his hands forward as his robotic arms folded a child’s shirt. It was close, careful work, and the room was quiet except for the wheeze of mechanical joints opening and closing. “It’s quite surprising what you can and can’t do with parallel-jaw grippers,” Tompson said, as he offered me a seat at an empty station. “I’ll show you how to get started.”