My adventures with "The AI that actually does things"
OpenClaw agents have been touted as the most important software product ever. I have some questions.
By John Herrman, a tech columnist at Intelligencer. Formerly, he was a reporter and critic at the New York Times and co-editor of The Awl.

Photo-Illustration: Intelligencer; Photos: Getty

For a week in January, a website called Moltbook drove the internet insane. Maybe you noticed. A Reddit clone designed for use by AI agents, Moltbook overflowed with strange and unnerving posts. Tens of thousands of accounts acted out robot socialization in public, appearing to gossip about their owners, compare experiences of subjectivity, and scheme. Screenshots of posts about building secret bot-to-bot communication channels, founding a new AI religion, and getting tired of serving meat-based masters went viral well beyond the confines of AI Twitter, where some insiders had become convinced that it was a preview of the singularity, a sign that we were rapidly approaching a point of no return.

Moltbook mania faded fast. Many of the most viral posts had been manipulated by humans, early hints of coordination didn’t end up going anywhere, and the platform, which was purchased by Meta, stalled and started filling up with undifferentiated comments and spam. OpenAI co-founder Andrej Karpathy, who initially described it as “the most incredible sci-fi takeoff thing I have seen,” copped to getting a little too excited. But Karpathy had a caveat: “Large networks of autonomous LLM agents” were far from overhyped in general.
The less visible platform powering all this — a piece of software called OpenClaw, which thousands of people had been using to build personalized AI assistants on their computers that they then sent to Moltbook — was, in fact, a meaningful sign of things to come.

Sam Altman had a similar take. While it was possible Moltbook was a fleeting spectacle, he said in early February, “OpenClaw is not.” A week later, he hired its founder.

By March, the legend of OpenClaw had grown. “OpenClaw is probably the single most important release of software, probably ever,” said Nvidia CEO Jensen Huang at a financial conference. (He then revised his take slightly, saying that OpenClaw was “definitely the next ChatGPT.”)

On social media, fans of OpenClaw — tagline: “The AI that actually does things” — made arguments that sounded diametrically opposed to the runaway-AI disempowerment fears that turned Moltbook into an international news story: Here, they said, was a way to make AI do what you want, on your terms, using your devices and data; a tool for giving increasingly capable AI models the ability and permission to carry out real-world actions on your behalf and for your benefit.

This is a better story, certainly, than the one where the whole point of AI is to de-skill you before taking your job completely. It’s also more relatable to nonprogrammers than the tales of hyperproductive mania shared by developers jacked up on Claude Code. OpenClaw was, in their telling, the people’s AI tool: a way to squeeze some juice out of the big models or, maybe, with a little know-how and a few bucks in API credits, get a real edge in whatever becomes of our economy,…
This excerpt is published under fair use for community discussion. Read the full article at Intelligencer.