Hugging Face LeRobot RCE: Pickle, nosec, and CVE-2026-25874
In 2022, Hugging Face introduced safetensors as a safer alternative to Python's pickle for model serialization, explicitly to avoid code-execution bugs tied to untrusted deserialization.
Four years later, their robotics framework LeRobot exposes a critical remote code execution (RCE) vulnerability: it calls pickle.loads() on data received over an unauthenticated, unencrypted gRPC port, with # nosec comments suppressing the linter warnings that would have flagged it. The issue is tracked as CVE-2026-25874, carries a CVSS score of 9.3, and is currently unpatched.
---
how the vulnerability works
LeRobot is Hugging Face’s open-source framework for training and deploying AI in real-world robotics. the PolicyServer and RobotClient components communicate over gRPC. the gRPC server uses add_insecure_port() — no TLS, no authentication, no certificate validation.
anyone who can reach the port over the network can connect and send data. the data gets deserialized with Python’s native pickle module. pickle deserialization is inherently dangerous: the deserializer executes arbitrary code embedded in the payload. there is no way to safely deserialize pickle from an untrusted source.
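to make the shape of the problem concrete, here is a minimal sketch of the pattern described above. this is not LeRobot's actual code: the servicer, method name, message field, and port number are all made up for illustration.

```python
# illustrative sketch only -- servicer, method, field, and port are hypothetical
import pickle
from concurrent import futures

import grpc


class PolicyServicer:  # stands in for the generated gRPC servicer base class
    def SendObservation(self, request, context):
        # request.data is raw bytes from whoever managed to connect to the port.
        # pickle.loads() will execute whatever code the payload embeds.
        observation = pickle.loads(request.data)  # the vulnerable call
        return observation


# (registration of the servicer via the generated add_*_to_server helper omitted)
server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
# no TLS, no authentication, no client verification of any kind
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```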
an attacker sends a crafted pickle payload to the gRPC port and gets code execution in the inference process context. inference processes typically run with elevated privileges: GPU access, access to model weights and datasets, and credentials for storage backends and model registries.
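if it helps to see why "arbitrary code" is literal, here is the textbook demonstration. nothing in it is specific to LeRobot, and the command is deliberately harmless; an attacker's would not be.

```python
# classic demonstration of why unpickling untrusted bytes is code execution
import os
import pickle


class Exploit:
    def __reduce__(self):
        # pickle records this call during serialization and replays it on load:
        # the unpickler invokes os.system("id") before returning anything
        return (os.system, ("id",))


payload = pickle.dumps(Exploit())

# this is all the "server" has to do for the attacker to win
pickle.loads(payload)
```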
from there: lateral movement across the internal network, exfiltration of Hugging Face API keys, corruption of model weights, and, if the inference server is attached to physical hardware, potential disruption of physical operations.
the nosec problem
this is the part that bothers me more than the vulnerability itself.
Bandit, the standard Python security linter, flags pickle.loads() as dangerous. this is a well-known warning. the LeRobot developers apparently saw it and added # nosec comments directly next to the offending calls. # nosec is the directive that tells Bandit to skip that specific line.
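for readers who have not run into it, the suppression looks roughly like this. a sketch, not the actual LeRobot source:

```python
import pickle


def deserialize(data: bytes):
    # without the trailing comment, Bandit reports its pickle check (B301) here.
    # with it, the finding disappears from every scan -- the risk does not.
    return pickle.loads(data)  # nosec
```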
they knew. and they suppressed the warning anyway.
this is not incompetence. this is a decision. someone made a conscious choice to use the unsafe format, saw the automated warning, and chose to silence it rather than use safetensors — which their own organization built — or any other safe alternative. i’m not interested in attributing malice. but i am interested in what it says about how security feedback loops work in fast-moving ML projects: the linter is doing its job, the warning is clear, and the response is to turn off the alarm.
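for contrast, this is roughly what the safe path looks like with safetensors, which carries only tensor data and has no code path to hijack on load. the tensor names below are invented; anything that is not a tensor (configs, metadata) would need a separate data-only format such as JSON.

```python
# sketch of moving tensors with safetensors instead of pickle
import torch
from safetensors.torch import load, save

# stand-in payload: the kind of numeric data a policy server actually needs to move
tensors = {"observation.state": torch.zeros(14), "action": torch.zeros(7)}

blob = save(tensors)      # bytes, safe to ship over the wire
restored = load(blob)     # parses data only; a malicious blob fails, it cannot execute
```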
the blast radius
if you’re running LeRobot in a production or research environment, the exposure depends on network configuration. the attack requires network access to the gRPC port. that means:
- if the port is on a machine connected to a shared network segment, every machine on that segment can reach it
- if the machine sits in a cloud environment with permissive security groups, the port may be reachable from the rest of the VPC, or from the internet
- if the inference server is running inside a corporate network, any compromised endpoint on that network can reach it
the fix planned for v0.6.0 has no confirmed release date. there is no patch available now.
what to do right now
- block the gRPC port at the network level. do not expose LeRobot inference servers to any network you do not fully control.
- put inference servers behind access controls: VPN, private subnets, or dedicated network segments with explicit allowlists (one transport-level option is sketched after this list)
- if you cannot take the server off the network immediately, consider whether the operational dependency is worth the exposure until the patch is available.
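as one concrete example of an access control at the transport layer, gRPC can require mutual TLS. to be clear, this is defense-in-depth only: it limits who can reach the endpoint, it does not make pickle deserialization safe. the certificate paths and port below are placeholders.

```python
# sketch: binding the gRPC server with mutual TLS instead of add_insecure_port()
from concurrent import futures

import grpc

with open("server.key", "rb") as f:
    server_key = f.read()
with open("server.crt", "rb") as f:
    server_cert = f.read()
with open("client-ca.crt", "rb") as f:
    client_ca = f.read()

credentials = grpc.ssl_server_credentials(
    [(server_key, server_cert)],
    root_certificates=client_ca,
    require_client_auth=True,  # connections without a trusted client cert are rejected
)

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
server.add_secure_port("[::]:50051", credentials)
server.start()
server.wait_for_termination()
```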
more broadly: this is a good moment to audit every AI framework running in your environment for unauthenticated service ports. LeRobot is not the only ML project exposing gRPC with no authentication. just last week, an SSRF in LMDeploy was exploited within 12 hours of the advisory being published, the same AI-infra-as-attack-surface pattern. the category is established.
Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Reach out here if you’re building AI infrastructure and want to make sure you’re not adding # nosec to the wrong things.
Related Reading
→ Comment and Control: The AI Agent Attack Surface
→ Indirect Prompt Injection: LLM Attacks in Production
→ Microsoft Entra Agent ID: What It Means for Identity Security