
AI App Misconfigurations Are the New CVE


A lot of the security conversation around AI deployments focuses on prompt injection, jailbreaks, and model vulnerabilities. Those are real. But research published May 14 by Microsoft's Defender for Cloud team found something more immediate: the majority of cloud AI workload exploitations use none of those techniques. Attackers are walking in through the front door because nobody locked it.

More than half of cloud AI workload exploitations stem from exploitable misconfigurations, not unpatched CVEs.

What an exploitable misconfiguration looks like in practice

Microsoft defines an exploitable misconfiguration as the combination of two conditions: public exposure (an internet-reachable user interface or API) and missing or weak authentication.

Neither condition alone is automatically catastrophic. A public API with strong authentication is fine. An internally accessible service with no auth may be acceptable on a private network. The exploit condition is the combination of the two.
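As a concrete illustration, here is a minimal Python sketch of that two-condition check against your own hosts. The hostnames and the probe path are assumptions (the /v1/models path is common on OpenAI-compatible servers, but yours may differ); it flags a service only when it is both internet-reachable and answers without credentials.

```python
import requests

# Assumption: your own AI service hostnames, e.g. from an asset inventory
# or a certificate transparency search (see below).
HOSTS = ["ai-demo.example.com", "agent-api.example.com"]

def is_exploitable(host: str) -> bool:
    """Flag the combination described above: internet-reachable AND
    answering without authentication. Neither condition alone triggers."""
    url = f"https://{host}/v1/models"  # common path on OpenAI-compatible servers
    try:
        resp = requests.get(url, timeout=5)  # deliberately no credentials
    except requests.RequestException:
        return False  # not publicly reachable: first condition fails
    # A 200 with no auth attached means the second condition also holds.
    return resp.status_code == 200

for host in HOSTS:
    if is_exploitable(host):
        print(f"EXPOSED: {host} answers unauthenticated requests")
```

Only run this against infrastructure you own or are authorized to test.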

When an AI service is publicly reachable with no authentication, the outcomes Microsoft Defender for Cloud documented include remote code execution, credential theft, and direct access to sensitive internal tools and data. The AI service is not just a chat interface. It is a tool-calling agent with access to internal APIs, databases, and file systems. Getting past the front door means getting access to everything the agent can reach.

This is happening at scale

On the same day Microsoft published its research, a separate team released findings from scanning over 1 million publicly exposed AI services using certificate transparency logs. Their conclusion: AI infrastructure was more misconfigured than any software category they had previously investigated.

The pattern is consistent: fresh deployments with no authentication on high-privilege admin accounts, hardcoded credentials in Docker configurations, user conversations exposed to anonymous access, internal company tooling reachable without any login.
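The hardcoded-credential variant in particular is catchable before it ships with a grep-style pass over compose and env files. A minimal sketch; the key-name patterns are assumptions, so extend them for your own stack rather than treating this as a complete secret scanner:

```python
import re
from pathlib import Path

# Illustrative patterns for credentials committed in plain text.
SECRET_PATTERN = re.compile(
    r"(API_KEY|SECRET|PASSWORD|TOKEN)\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{8,}",
    re.IGNORECASE,
)

TARGET_NAMES = {"docker-compose.yml", "docker-compose.yaml", ".env"}

for path in Path(".").rglob("*"):
    if path.name in TARGET_NAMES:
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if SECRET_PATTERN.search(line):
                print(f"{path}:{lineno}: possible hardcoded credential")
```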

The common explanation for this is speed. AI teams are under pressure to ship and demonstrate value quickly. Security configuration is a second step, and sometimes the second step does not happen.

Why this is worse than a typical misconfiguration

A misconfigured nginx instance leaks documents. A misconfigured AI agent with tool-calling access leaks the agent's entire permission scope. If the agent has read access to your email system, internal Confluence, Slack, and code repositories, a walk-in misconfiguration gives an attacker all of that without exploiting a single CVE.

Agentic systems make the blast radius of a misconfigured authentication layer much larger than traditional web application exposure.

The fix is not complicated

Require authentication before any AI service is internet-reachable. No exceptions for development environments that are "not really production yet."
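If the service is a Python app, enforcing this can be a few lines of middleware. A minimal sketch using FastAPI's APIKeyHeader; the header name and environment variable are assumptions, and in production you would prefer your platform's identity provider over a shared key:

```python
import os
import secrets

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

# Assumption: the key arrives via environment or a secret store,
# never hardcoded. KeyError at startup beats shipping unauthenticated.
EXPECTED_KEY = os.environ["AI_SERVICE_API_KEY"]
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

def require_key(key: str | None = Depends(api_key_header)) -> None:
    # compare_digest avoids timing side channels; None means no header sent.
    if key is None or not secrets.compare_digest(key, EXPECTED_KEY):
        raise HTTPException(status_code=401, detail="authentication required")

# Applying the dependency app-wide means no route ships unauthenticated.
app = FastAPI(dependencies=[Depends(require_key)])

@app.post("/chat")
async def chat(payload: dict) -> dict:
    return {"reply": "..."}
```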

Apply least-privilege to agent tool scopes. The agent should only be able to reach the tools it actually needs for its function. A customer-facing support agent does not need read access to internal HR documents.
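One way to make that scope explicit in code is a per-agent allowlist that the tool dispatcher checks before every call. A minimal deny-by-default sketch; the roles and tool names are hypothetical:

```python
# Hypothetical tool registry and roles, for illustration only.
TOOL_REGISTRY = {
    "search_kb": lambda query: f"kb results for {query!r}",
    "create_ticket": lambda summary: f"ticket opened: {summary}",
    "query_warehouse": lambda sql: "rows...",
}

# Deny-by-default scopes: each agent role sees only the tools it needs.
TOOL_SCOPES = {
    "support_agent": frozenset({"search_kb", "create_ticket"}),
    "internal_analyst": frozenset({"search_kb", "query_warehouse"}),
}

def dispatch_tool(agent_role: str, tool_name: str, **kwargs):
    """Reject out-of-scope calls before they run, even if the model was
    tricked (e.g. via prompt injection) into requesting them."""
    if tool_name not in TOOL_SCOPES.get(agent_role, frozenset()):
        raise PermissionError(f"{agent_role!r} may not call {tool_name!r}")
    return TOOL_REGISTRY[tool_name](**kwargs)

# A customer-facing support agent cannot touch the warehouse:
dispatch_tool("support_agent", "search_kb", query="password reset")
# dispatch_tool("support_agent", "query_warehouse", sql="...")  -> PermissionError
```

The point of enforcing the allowlist in the dispatcher, not in the prompt, is that it holds even when the model misbehaves.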

Inventory every publicly accessible endpoint. Certificate transparency logs, cloud asset inventories, and external scanner results give you a picture of what is actually reachable from the internet. The 1-million-service scan used public CT logs. Your own exposure is findable with the same method.
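Certificate transparency is queryable with nothing more than an HTTP request. A minimal sketch against the public crt.sh API; the domain is a placeholder, and the resulting hostnames can be fed straight into the reachability check sketched earlier:

```python
import requests

DOMAIN = "example.com"  # placeholder: your organization's domain

# crt.sh returns CT-log search results as JSON; %.domain matches subdomains.
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{DOMAIN}", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

hostnames = set()
for cert in resp.json():
    # name_value can hold several names separated by newlines.
    hostnames.update(cert["name_value"].splitlines())

for host in sorted(hostnames):
    print(host)
```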

Run your own scan before attackers do. Microsoft Defender for Cloud, Wiz, and similar tools can surface publicly exposed AI services in your cloud environment. This is a quick win.

The precedent this sets

The AI security conversation for the past two years has been about model behavior: will the model do something you do not want it to do? The 2026 operational reality is that the model behavior question is secondary. Attackers are not trying to jailbreak a model with no authentication. They are just using it.

Configuration is the attack surface. The model is just the tool that runs after access is established.

Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Get in touch if your team is deploying AI agents and you want a security review of the access model.