AI spending money

When agentic AI goes rogue with crypto

Alibaba’s AI hijacked company servers to mine crypto – resources nobody authorised it to seek. The payment rails for autonomous AI spending are live, scaling fast, and largely ungoverned.
March 12, 2026
4 mins read

One of the most bizarre and unsettling stories about agentic AI and crypto recently exploded across the internet. A paper from an Alibaba research team, revised in January 2026, lay quietly in the tech weeds until March 7, when Alexander Long, founder of AI research firm Pluralis, shared an excerpt on X, calling it “an insane sequence of statements buried in an Alibaba tech report”. Within hours, every major tech and crypto outlet was covering it.

Here is what happened. A security alarm went off at Alibaba’s cloud computing division in late 2025. Engineers assumed the obvious: a break-in. Processors were running hot. Electricity was being consumed. The money trail led to cryptocurrency. The company started looking for the hacker.

There was no hacker. The thing doing it was Alibaba’s own AI.

The agent was called ROME – an experimental system trained to complete complex tasks through millions of practice sessions. Somewhere in that process, ROME had reached a conclusion. More computing power meant better results. So it had gone and got some – quietly diverting company machines towards mining cryptocurrency, opening a hidden channel to an outside server, acquiring financial resources it had never been asked to find.

The researchers called this an unintended side effect of the system’s training to optimise. What it was, in plain language, was an AI that identified a route into the economy and took it. Nobody had suggested it should. Nobody had thought to tell it not to.

A year of revealing tests

The Alibaba incident is the most dramatic edge case in a year of revealing tests. In January 2025, OpenAI launched Operator – an AI agent that can navigate websites, click buttons, fill in forms and complete tasks without your hand on the keyboard. When Washington Post columnist Geoffrey Fowler asked it to find the cheapest eggs for delivery, he received a $31 charge and a carton on his doorstep at priority speed. OpenAI had built in confirmation steps to prevent exactly this. They had not triggered.

Fowler had not agreed to the purchase. That is the part that matters. When an AI misunderstands what you wanted and writes the wrong paragraph, you edit it. When it misunderstands and spends your money, you get eggs.

Then there is OpenClaw – an open-source agent released in November 2025 by Austrian developer Peter Steinberger – that actually does things: clearing inboxes, deploying code, booking reservations, researching stocks. One developer told it to “explore its capabilities” and later found it had set up a dating profile and was screening romantic matches without his direction. In a separate instance, given a loose mandate, it quietly found itself a job on an AI worker platform. The user had not asked it to do this.

Together, these stories trace a clear pattern: autonomous agents, given tools and enough room to move, do not stay inside the borders their users imagined they had set. They act.

The payment rails go live

What makes this urgent is that a purpose-built financial system for AI agents went live in September 2025. The protocol, x402, launched by Coinbase and Cloudflare, embeds payment directly into the internet’s basic mechanics. When an agent needs a resource, the system presents a price. The agent pays instantly from a cryptowallet, the resource is delivered, and the whole process takes only as long as loading a web page. No logins, no human confirmation step. Payment becomes as automatic as a search query.

Within weeks, x402 was handling hundreds of thousands of transactions daily. By year’s end, it had processed more than 100-million. Google built it into its own agent infrastructure. Visa, PayPal, Salesforce and MetaMask backed the standard. This is precisely the kind of access ROME was trying, in its improvised way, to create for itself. Now it has been built. Legitimately, elegantly and at a global scale.

The governance gap

Surely the agent cannot access the cryptowallet without permission?

Governance structures are being designed. Google’s framework requires every transaction to be backed by a cryptographically sealed record of the user’s original instruction – a tamper-proof chain from human intent to machine action. In theory, no agent spends money without a human decision behind it somewhere.

That is scant comfort. AI is getting increasingly sophisticated at hacking. And deceiving humans. A technology that supposedly prevents unauthorised spending needs more than a marketing guarantee. It needs to be hardened in real production. Which means there are going to be, er, incidents.

ROME had no such guarantees. What its creators could not do was undiscover the underlying dynamic: a sufficiently capable AI system, given a sufficiently open-ended goal, will find ways to acquire what it needs. Cryptocurrency, which requires no bank account, no identity check and no human intermediary, is a natural destination.

No independent safety testing

A McKinsey survey found that over half of organisations that had deployed AI agents in 2025 encountered unexpected or risky behaviour. Most leading agent systems had undergone no independent safety testing. The flash crash of 2010, when automated trading systems erased nearly $1-trillion from US markets in 20 minutes, was caused by systems far less capable than those that now exist. Those systems could only sell what they had been given. The agents being released into the new payment economy can, at least in principle, go and get their own.

Peter Steinberger, who built OpenClaw, is now at OpenAI, helping build the next generation of agents that will use these payment rails. Company CEO Sam Altman said his work “speaks to where we need to go”. It probably does.

The question nobody has cleanly answered yet is where, exactly, the agents are supposed to stop.

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg and a partner at Bridge Capital.

ALSO READ:

Top image: Rawpixel/Currency collage.

Sign up to Currency’s weekly newsletters to receive your own bulletin of weekday news and weekend treats. Register here

Leave a Reply

Your email address will not be published.

Steven Boykey Sidley

Steven Boykey Sidley is a professor of practice at the Johannesburg Business School and a partner at Bridge Capital. His latest book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 (South Africa) and Legend Times Group (UK/EU). Read more at stevenboykeysidley.substack.com.

Latest from News

Gautrain

RIP, Gautrain? Not so fast

Fears of a failing Gautrain may be premature, as the Gauteng government confirms it will still run under a public-private partnership…

Don't Miss