marimo on kev and the ai supply chain has arrived
on april 23 cisa added cve-2026-39987 to the known exploited vulnerabilities catalog. the bug lives in marimo, the python notebook platform that a lot of data science teams have quietly standardized on for interactive analysis. the flaw is pre-auth remote code execution: a websocket terminal endpoint at /terminal/ws skips authentication entirely, so an unauthenticated attacker connects and gets a full pty shell as the user marimo runs under, which in the default docker image is root.
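if you want to inventory exposure before you patch, the handshake itself is checkable from the outside. a minimal sketch below, with the assumptions labeled: the /terminal/ws path comes from the advisory, port 2718 is marimo's usual default, and the heuristic is that a server answering the websocket upgrade with 101 and no auth challenge should be treated as exposed.

```python
# minimal exposure probe for an unauthenticated marimo terminal websocket.
# sketch only: /terminal/ws is the path named in the advisory; port 2718
# is assumed as marimo's default. a 101 Switching Protocols with no auth
# challenge means the upgrade was accepted unauthenticated.
import base64
import http.client
import os


def build_handshake_headers() -> dict:
    """Standard WebSocket upgrade headers with a random client key."""
    key = base64.b64encode(os.urandom(16)).decode()
    return {
        "Upgrade": "websocket",
        "Connection": "Upgrade",
        "Sec-WebSocket-Key": key,
        "Sec-WebSocket-Version": "13",
    }


def looks_exposed(status: int) -> bool:
    """101 = upgrade accepted with no auth; 401/403 = gated."""
    return status == 101


def probe(host: str, port: int = 2718, path: str = "/terminal/ws",
          timeout: float = 5.0) -> bool:
    """Attempt the upgrade handshake and report whether it was accepted."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", path, headers=build_handshake_headers())
        return looks_exposed(conn.getresponse().status)
    finally:
        conn.close()
```

run `probe()` against your own hosts only; it makes a single GET and never sends terminal input.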
cvss is 9.3. federal civilian deadline is may 14. but if you are still sitting on this two weeks from now, you are going to have a bigger problem than a deadline.
the bug is not the story
pre-auth rce in a python notebook platform is bad. what makes this an early-warning story rather than a routine kev add is everything that happened around the bug.
sysdig's threat research team published the follow-on analysis on april 22 and the sequence of events is where the lesson is. the advisory dropped on april 8. sysdig's honeypot infrastructure caught the first successful exploitation attempt 9 hours and 41 minutes later. by the time they had enough telemetry to publish, they had logged 662 discrete exploit events from 11 unique source ips across 10 countries. full credential-theft operations — ssh keys, cloud provider api keys, llm provider tokens, database strings — completed on compromised hosts in under three minutes.
three minutes. that is how long you have on a vulnerable marimo host from the moment exploitation starts to the moment your cloud credentials are on the attacker's side of the network.
the payload is a template, not a bug
the interesting part of the sysdig report is the malware and the delivery chain, not the exploitation itself. the payload is a previously undocumented variant of nkabuse, a go-based backdoor that uses the nkn blockchain as its command-and-control substrate. nkn stands for new kind of network — a decentralized, peer-to-peer protocol where messages are relayed by chain-participant nodes. the practical consequence: there is no ip address to block and no domain to sinkhole. the c2 traffic blends in with legitimate nkn relay activity.
the backdoor also handles webrtc and stun to traverse nat cleanly and supports proxying, which means compromised marimo hosts can be pivoted into the broader network without new tooling.
the delivery chain is the other half. the attackers set up a hugging face space called vsccode-modetx — an intentional typosquat for vs code — and used it to host a dropper script and a binary named kagent. after the marimo rce, the exploit shell fires curl against hugging face to pull the payload. that's the full staging infrastructure. no dedicated attacker server, no sketchy-looking domain in the detection surface. just a github-alike content host with a reputation strong enough that most network egress rules allow it by default.
this is the ai supply chain attack template
pair this with the jfrog writeup published the same week. jfrog documented a rogue npm package called js-logger-pack that ships a postinstall downloader fetching a binary called microsoftsystem64 from the hugging face repo lordplay/system-releases. once installed, the implant exfiltrates stolen data into private hugging face datasets rather than to a purpose-built c2.
two independent 2026 campaigns, two different initial-access vectors, one common denominator: hugging face as both malware cdn and data-exfil storage. same platform. same attacker-trust assumption.
the pattern is now clear. 2026's ai supply chain attack template is four steps. one, find an ai-developer tool that deploys with relaxed defaults — default docker, root user, internet-facing, environment full of the keys that actually matter. two, exploit quickly, because the community around these tools does not yet patch at mainstream-enterprise speed. three, stage from a content host that enterprise egress rules trust — hugging face, but also github, pypi mirrors, npm registries. four, exfiltrate or persist over infrastructure that is hard to sinkhole — decentralized c2, private datasets on shared platforms, legitimate-looking cloud storage.
marimo is the case study. it will not be the last one.
what to actually do
patch marimo to 1.12.1 or later. today.
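a quick way to sweep every environment for the fixed release. a sketch, assuming the 1.12.1 version from above and plain x.y.z version strings (no pre-release suffixes); the comparison logic is the part worth reusing in a fleet-wide check.

```python
# check whether the installed marimo is at or above the fixed release.
# assumes the fixed version 1.12.1 named in the advisory and plain
# x.y.z version strings; pre-release suffixes are not handled.
from importlib.metadata import PackageNotFoundError, version

FIXED = (1, 12, 1)


def parse(v: str) -> tuple:
    """Turn 'x.y.z' into a comparable integer tuple."""
    return tuple(int(part) for part in v.split(".")[:3])


def needs_patch(installed: str) -> bool:
    """True when the installed version predates the fix."""
    return parse(installed) < FIXED


def check() -> str:
    try:
        v = version("marimo")
    except PackageNotFoundError:
        return "marimo not installed in this environment"
    return f"marimo {v}: " + ("PATCH NOW" if needs_patch(v) else "ok")
```

run `check()` inside each virtualenv or container image you ship; pip metadata is per-environment, so one clean host proves nothing about the rest.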
if you cannot patch today, pull any marimo instance off the public internet. put it behind a vpn or a zero-trust proxy. the days of running a python notebook with no auth on a public ip are over, and frankly should have been over years ago, but here we are.
rotate credentials. every api key, every service account token, every cloud credential that ever lived inside a running marimo process should be considered compromised until proven otherwise. that includes the obvious ones — aws, gcp, azure — and the less obvious ones: openai and anthropic api keys, database connection strings, internal service tokens. if your marimo hosts had access to production data, the secrets in memory are already gone.
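building the rotation list is the part teams botch, so here is a sketch for inventorying secret-shaped variables in a process environment. the name patterns are illustrative, not exhaustive; extend them for your own stack, and note that it deliberately reports names only, never values.

```python
# inventory secret-shaped environment variables on a possibly-compromised
# host so the rotation list is complete. the patterns are illustrative,
# not exhaustive -- extend for your own stack. names only, never values.
import re

SECRET_PATTERNS = re.compile(
    r"(AWS_|GOOGLE_|AZURE_|OPENAI|ANTHROPIC|_TOKEN$|_KEY$|_SECRET$"
    r"|DATABASE_URL|CONNECTION_STRING)",
    re.IGNORECASE,
)


def secretlike_names(env: dict) -> list:
    """Return sorted variable names that look like credentials."""
    return sorted(name for name in env if SECRET_PATTERNS.search(name))
```

run it over `dict(os.environ)` inside the marimo container, not on your laptop; the environment that matters is the one the notebook process actually saw.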
audit hugging face egress. add network detection for unexpected outbound connections to huggingface.co from production workloads, ci runners, or any machine that is not actively pulling models it knows it needs. block outbound by default, allowlist per-repo. the same discipline applies to npm, github releases, container registries.
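the per-repo allowlist idea can be sketched as a pass over egress logs. assumptions are labeled in the code: the log format (source host, url) and the allowlist entry are illustrative stand-ins for whatever your proxy actually emits and whatever models you actually pull.

```python
# flag outbound hugging face requests that are not on a per-repo
# allowlist. sketch: assumes egress logs yield (source_host, url) pairs;
# the allowlist entry is an illustrative placeholder, not a recommendation.
from typing import Optional
from urllib.parse import urlparse

ALLOWED_REPOS = {
    "meta-llama/Llama-3.1-8B",  # example entry -- use your own inventory
}


def repo_from_url(url: str) -> Optional[str]:
    """Extract 'org/repo' from a huggingface.co URL, else None."""
    parsed = urlparse(url)
    if parsed.hostname not in ("huggingface.co", "www.huggingface.co"):
        return None
    parts = [p for p in parsed.path.split("/") if p]
    return "/".join(parts[:2]) if len(parts) >= 2 else None


def flag_events(events: list) -> list:
    """Return (source_host, repo) pairs that hit huggingface.co off-list."""
    flagged = []
    for source_host, url in events:
        repo = repo_from_url(url)
        if repo is not None and repo not in ALLOWED_REPOS:
            flagged.append((source_host, repo))
    return flagged
```

the design choice worth keeping even if you discard the code: allowlist at repo granularity, not hostname granularity, because the whole point of the campaigns above is that the hostname is legitimate.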
change the default deployment posture for ai-developer tools across your org. marimo is not unique. any internal platform that asks you to pull a docker image and expose a port is a candidate for the same class of attack. run ai notebooks and agent frameworks as non-root, network-isolated, with no standing credentials in process environment. use short-lived tokens. treat the notebook like an untrusted user, because it is.
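that posture change can be expressed as a lint over container config before anything ships. a sketch under stated assumptions: the field names loosely mirror `docker inspect` output but are simplified for illustration, and the env-var suffix list is a starting point, not a standard.

```python
# lint a container config for the deployment-posture rules above:
# non-root, no host networking, no ports bound to all interfaces,
# no standing credentials in the environment. field names loosely
# mirror `docker inspect` output and are assumptions, not an api.


def posture_findings(config: dict) -> list:
    """Return human-readable findings for a single container config."""
    findings = []
    if config.get("User", "") in ("", "root", "0"):
        findings.append("runs as root: set a non-root User")
    if config.get("NetworkMode") == "host":
        findings.append("host networking: use an isolated network instead")
    for binding in config.get("PortBindings", []):
        if binding.get("HostIp", "0.0.0.0") == "0.0.0.0":
            findings.append(
                f"port {binding.get('HostPort')} bound to all interfaces"
            )
    for var in config.get("Env", []):
        name = var.split("=", 1)[0]
        if name.endswith(("_KEY", "_SECRET", "_TOKEN")):
            findings.append(f"standing credential in env: {name}")
    return findings
```

wire something like this into ci so a config that fails the lint never deploys; the point is to make the relaxed default impossible to ship by accident, not to audit it after the fact.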
the broader point
the most load-bearing change in the 2026 threat landscape is not a new malware family or a new cve class. it is the acknowledgement that the ai developer ecosystem is now producing enough software fast enough that its security maturity has diverged from the enterprise standard that surrounds it. projects like marimo are widely adopted, shipped by small teams with thin security review, deployed by users who are thinking about notebooks and not about public attack surface.
cve-2026-39987 on kev is a symbolic moment. cisa is now applying federal deadline pressure to the ai-tooling supply chain the way it has been applying pressure to fortinet, citrix and cisco for years. expect more.
the three-minute credential-theft window tells you the economics. tooling that carries cloud keys by default, exposed with pre-auth rce, becomes a renewable resource for attackers. fix the deployment posture, not just the cve.
Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Get in touch if you want to talk through your own ai tooling exposure.
Related reading
On malicious-model RCEs in the same AI-tooling class: The First Practical Malicious-Model-File RCE Is Here, and It's a Jinja2 Template
On AI supply-chain compromise via trusted vendor pivots: Vercel Got Popped Through an AI Tool Nobody Was Tracking as a Vendor
On CISA KEV short-deadline dynamics: CISA's Four-Day Cisco SD-WAN Deadline Is Your Deadline Too