Anthropic’s Claude Takes Control of a Robot Dog

As more robots start showing up in warehouses, offices, and even people’s homes, the idea of large language models hacking into complex systems sounds like the stuff of sci-fi nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried taking control of a robot—in this case, a robot dog.

In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to do physical tasks. On one level, their findings demonstrate the agentic coding abilities of modern AI models. On another, they hint at how these systems may start to extend into the physical realm as models master more aspects of coding and get better at interacting with software and, eventually, physical objects.

“We have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly,” Logan Graham, a member of Anthropic’s red team, which studies models for potential risks, tells WIRED. “This will really require models to interface more with robots.”

Anthropic was founded in 2021 by former OpenAI staffers who believed that AI might become problematic, even dangerous, as it advances. Today's models are not smart enough to take full control of a robot, Graham says, but future models might be. Studying how people leverage LLMs to program robots, he says, could help the industry prepare for "models eventually self-embodying," referring to the idea that AI may someday operate physical systems.

It is still unclear why an AI model would decide to take control of a robot—let alone do something malevolent with it. But speculating about the worst-case scenario is part of Anthropic’s brand, and it helps position the company as a key player in the responsible AI movement.

In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers without previous robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to do specific activities. The teams were given access to a controller, then asked to complete increasingly complex tasks. One group used Claude's coding model; the other wrote code without AI assistance. The group using Claude completed some tasks, though not all of them, faster than the human-only group. For example, it got the robot to walk around and find a beach ball, something the human-only group could not figure out.

Anthropic also studied the collaboration dynamics within both teams by recording and analyzing their interactions. The group without access to Claude expressed more negative sentiment and confusion, perhaps because Claude made connecting to the robot quicker and produced an easier-to-use interface.

The Go2 robot used in Anthropic's experiments costs $16,900, which is relatively cheap by robot standards. It is typically deployed in industries like construction and manufacturing to perform remote inspections and security patrols. The robot can walk autonomously but generally relies on high-level software commands or a person operating a controller. The Go2 is made by Unitree, which is based in Hangzhou, China. Unitree's AI systems are currently the most popular on the market, according to a recent report by SemiAnalysis.

The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software, turning them into agents rather than mere text generators.
