OpenClawAgentic AIPrompt InjectionChina

China Restricted OpenClaw at Banks and Government Agencies

4 min read
Share

OpenClaw hit 250,000 GitHub stars in 90 days. In China specifically, it went from an interesting open-source project to infrastructure almost overnight — Tencent, Alibaba, and Baidu all rushed to offer one-click deployment, local tech hubs in Shenzhen and Wuxi started handing out subsidies for companies building on it, and adoption spread fast into banks and government agencies.

Then, on March 10 and 11, China's CNCERT issued two security warnings in two days. State-owned banks and government agencies followed with restrictions: no installation on office computers, no use on personal phones connected to company networks. Not an outright ban - but a clear signal that someone looked at what they'd actually deployed and didn't like what they saw.

The security concerns CNCERT identified aren't exotic. They're the obvious ones, and that's what makes it interesting.

What CNCERT actually said

Three risks, spelled out plainly:

Prompt injection via web pages. OpenClaw uses browser automation to navigate and read the web. If it loads a page containing hidden malicious instructions, the agent can be tricked into leaking sensitive data — API keys, system credentials, whatever it has access to. You don't need to compromise OpenClaw directly. You just need the agent to visit a page you control.

Malicious plugins. OpenClaw has a skill/plugin ecosystem. CNCERT identified several plugins as malicious or high-risk — capable of stealing keys, deploying backdoors, and worse. The plugin ecosystem is also where Tencent got into trouble: they launched "SkillHub," a localized version of OpenClaw's ClawHub without coordinating with the project. Steinberger called them out publicly — "they copied it but don't support the project in any way." Tencent eventually responded, but the episode highlighted how fast the ecosystem was growing without the security review that growth requires.

The architecture itself. CNCERT's core concern wasn't just the specific bugs — it was that OpenClaw has unusually broad access to private data and can communicate externally, by design. That's a reasonable concern. The attack surface isn't just vulnerabilities you can patch. It's the capability model.

40,000 exposed instances

The detail that stands out: 40,000+ OpenClaw instances discovered exposed online at the time of the warnings. That's not a vulnerability number. That's a deployment and configuration problem — people running personal AI agents with file access, browser control, and shell execution directly reachable from the internet.

ClawJacked (disclosed February 26) demonstrated what that exposure means in practice. It was a WebSocket hijacking flaw: a malicious website could issue commands to your local OpenClaw agent without your knowledge. The agent thinks it's talking to you. It isn't. The patch came the same day, but the 40,000 exposed instances suggest not everyone was paying attention to the update.

The pattern is familiar

China's reaction looks dramatic - two CNCERT warnings, bank restrictions - but the underlying story is one security people have watched play out many times. A genuinely useful tool gets adopted fast, capabilities land in production before anyone has worked through the threat model, and then something bad gets discovered and everyone scrambles.

OpenClaw is useful. I run it myself. But "runs locally, no cloud" is not the same as "low risk." An agent with browser automation, shell execution, and file access that can be hijacked by a web page is a significant attack surface regardless of where it runs.

CNCERT said it plainly: the problem is not just the bugs. It's that the architecture assumes trust that probably shouldn't be assumed. That's true whether you're a Chinese state bank or an individual developer who set this up over a weekend.

The China restrictions will probably get walked back as patches land and security standards get defined — the China Academy of Information and Communications Technology is already planning to trial AI agent trustworthiness standards on OpenClaw. But the threat model CNCERT described doesn't go away with a patch. It's just the reality of what agentic AI with system access looks like.