
Vercel x Context.ai, Week Two: Trend Micro Names the OAuth Gap



most supply chain conversations in security still default to package registries. npm, PyPI, docker hub, github actions. that mental model has been the working framework for three or four years now. axios, codecov, log4j, the solarwinds tail, every checkmarx hit. all package supply chain.

the vercel breach is the canonical demonstration that the supply chain has expanded into a place most security teams aren't watching: oauth grants given by individual employees to third-party AI tools.

trend micro's april 23 writeup walks the full chain. it's worth pulling apart, because the platform-design commentary in their analysis is the part everyone else's coverage missed.

the chain

start at the bottom. february 2026, an employee at context.ai goes searching for roblox cheats on their work computer. they get redirected through one of the lumma stealer drive-by chains. lumma harvests their browser session, including google workspace oauth tokens.

context.ai is an AI office suite vendor. they integrate with companies' google workspace and microsoft 365 tenants to make agents that can read documents, summarize meetings, draft emails, the whole AI-productivity surface. that integration runs on oauth.

the operator now has google workspace oauth tokens for a context.ai employee. but more importantly, context.ai itself has oauth grants in customer tenants. so the operator can pivot from "compromised context.ai employee account" to "compromised oauth grant chain into customer tenants."

one of those customer tenants is vercel.

at some point in february or march, a vercel employee signed up for context.ai's AI office suite using their vercel enterprise google workspace account, and at signup they clicked through "allow all" on the oauth consent screen. that grant gave context.ai broad scope into the vercel employee's account, and through it, into vercel's enterprise google workspace.

the operator used that grant to take over the employee's individual vercel account, then maneuvered into vercel's internal systems, and from there enumerated and decrypted environment variables for a subset of vercel customer projects.

the data is on breachforums now, posted by an actor claiming to represent shinyhunters, listing customer api keys, source code, and database data. vercel says the breach may affect "hundreds of users across many organizations."

the trend micro angle

trend micro's writeup is the first one that names the structural problems instead of just narrating the attack. two issues stand out.

first, vercel's environment-variable sensitivity model. vercel lets project owners mark env vars as "sensitive" or "non-sensitive." sensitive vars are encrypted at rest. non-sensitive vars are not. that distinction is presented as a usability feature: "for env vars that aren't secret, you don't have to deal with the encryption-at-rest performance cost."

the problem is that "sensitive" is a project-setup decision, made by whichever developer first added the env var, often with no threat-modeling context. if you're shipping a new project, you mark the database URL as sensitive (good), but you mark the api endpoint URL as non-sensitive because, you know, it's just a URL. except that "URL" is your internal admin endpoint, and once an attacker has read access to your env vars, knowing where your admin lives is exactly the next thing they want.

trend micro's argument is that any env var readable by a holder of oauth scope into your platform is sensitive by definition, and the platform should treat "non-sensitive" as a misnomer. encrypt at rest by default, eat the performance cost, and let users opt out only with explicit threat-model justification.

second, the oauth grant model itself. trend micro names this as an "oauth gap": individual employees grant production-tier scopes to third-party AI tools at signup, those grants persist, and the org's IAM team never sees them. when context.ai gets popped, the org has no visibility into which of its employees granted scopes, what those scopes were, or what the blast radius of revoking them looks like.

this is not a vercel-specific problem. it's a systemic gap in how oauth scopes work for AI tooling. the AI category is uniquely affected because:

  • AI tools tend to ask for broad scopes (read all email, write all calendar, access all drive)
  • the value proposition is "let the AI do work for you," which justifies the broad scope to users
  • adoption is bottom-up; employees sign up individually for trial accounts, then ask the org to formalize later
  • the AI category turns over fast, so any audit you did six months ago is already stale

put together: every org running google workspace or microsoft 365 has an inventory of AI-tool oauth grants that it has never seen, with scopes that have never been threat-modeled, granted by employees who have never been told this is part of the security perimeter.
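to make that invisible inventory concrete, here's a minimal sketch of a per-app blast-radius summary. the record shape (user, app, scopes) is an assumption for illustration, not any vendor's export format; it's just "flatten whatever your admin console gives you into rows, then group by app":

```python
from collections import defaultdict

def blast_radius(grants):
    """Summarize per-app exposure from a flat list of OAuth grant records.

    Each record is a dict with hypothetical keys 'user', 'app', and
    'scopes' (a list of scope strings). Returns, per app, the sorted
    list of employees who granted it access and the union of scopes
    it holds; i.e. what revoking that app would actually touch.
    """
    apps = defaultdict(lambda: {"users": set(), "scopes": set()})
    for grant in grants:
        apps[grant["app"]]["users"].add(grant["user"])
        apps[grant["app"]]["scopes"].update(grant["scopes"])
    return {
        app: {"users": sorted(v["users"]), "scopes": sorted(v["scopes"])}
        for app, v in apps.items()
    }
```

the point of the grouping is the revocation question: if a context.ai-class vendor gets popped tomorrow, this is the table that tells you how many employees and which scopes are in the blast radius.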

what to actually do

the temptation is to write a thousand-word policy memo about AI tool procurement. don't. here's the shortlist.

immediate (today):

  • pull the oauth grants inventory from google workspace admin console (Security > API controls > Domain-wide delegation, plus Apps > Configured apps) or microsoft 365 (Entra > Enterprise applications > User consent grants).
  • filter to anything AI-adjacent. this is where you'll find context.ai-class tools: AI office suites, AI meeting summarizers, AI email drafters, AI agent platforms, AI code assistants that integrate beyond the IDE, AI-powered "smart" anything.
  • for every grant with broad scopes (read all mail, write all calendar, access all drive, full openid + email + profile, full files.readwrite), revoke unless there is a documented business justification with named owner.
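the broad-scope filter in that last step can be sketched as a few lines of python. the scope strings below are real google oauth scopes, but the choice of which scopes count as "broad" is a starting point to tune, not a policy, and the grant record shape is the same hypothetical one as any flattened admin-console export:

```python
# Google OAuth scopes that amount to "all mail / all drive / all calendar".
# Illustrative starting set; extend for your tenant (and the Microsoft
# Graph equivalents like Mail.ReadWrite or Files.ReadWrite.All).
BROAD_SCOPES = {
    "https://mail.google.com/",                               # full Gmail
    "https://www.googleapis.com/auth/drive",                  # full Drive
    "https://www.googleapis.com/auth/calendar",               # full Calendar
    "https://www.googleapis.com/auth/admin.directory.user",   # directory admin
}

def flag_broad_grants(grants):
    """Return the grants that include at least one broad scope.

    `grants` is a list of dicts with hypothetical 'user', 'app', and
    'scopes' keys. Every record this returns needs a documented
    business justification with a named owner, or it gets revoked.
    """
    return [
        grant for grant in grants
        if any(scope in BROAD_SCOPES for scope in grant["scopes"])
    ]
```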

this week:

  • audit any platform-tier production env vars and determine which would be exposed if an oauth grant into your tenant were compromised. if you're on vercel, render, fly.io, netlify, or any platform-as-a-service, this is your blast radius. push for encryption at rest by default, and flip any "non-sensitive" markings to sensitive on env vars touching production identity, payment, or admin paths.
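a sketch of that env-var audit, assuming you've pulled the records into a list of dicts with "key" and "type" fields (the shape loosely mirrors what a platform env api might return, but treat the field names, and the "sensitive" type value, as assumptions to adapt to your platform):

```python
import re

# Name patterns that suggest an env var touches identity, payments,
# or admin paths. Illustrative; tune for your own naming conventions.
RISKY_KEY = re.compile(
    r"(SECRET|TOKEN|PASSWORD|PRIVATE_KEY|DATABASE_URL|ADMIN|STRIPE|API_KEY)",
    re.IGNORECASE,
)

def flag_demotion_candidates(env_vars):
    """Return the names of env vars that look production-critical by
    name but are not marked sensitive (i.e. not encrypted at rest).

    Each record is a dict with assumed 'key' and 'type' fields.
    These are the "it's just a URL" cases to re-mark as sensitive.
    """
    return [
        var["key"] for var in env_vars
        if RISKY_KEY.search(var["key"]) and var["type"] != "sensitive"
    ]
```

a name-pattern check like this is deliberately noisy: false positives cost you a minute of review, while a false negative is the internal admin endpoint sitting unencrypted when the oauth grant gets compromised.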
  • write a one-page oauth-grant policy. minimum content: who can grant what scopes (broad scopes require IAM team review); how often grants get re-reviewed (every 90 days for AI tools); what happens when a vendor has a breach (immediate revocation pending breach scope confirmation).
  • add AI tool oauth grants as a category to your vendor risk register. populate the existing entries from the inventory above.

this month:

  • look at your AI tooling adoption pattern from the bottom up. which teams are using which tools? which scopes have been granted? are there teams running AI productivity workflows that the security team doesn't know about? if yes, fix the visibility gap before you fix the scope gap.
  • engage with your platform-as-a-service vendors on env-var-encryption-at-rest defaults. if you're on vercel and you have non-sensitive env vars in production environments, ask for a roadmap commitment to encryption at rest by default. if you're on render or fly or netlify, do the same.
  • for any AI tool you keep after the audit, verify the vendor has a security disclosure policy, a breach notification SLA in their contract, and is willing to share their last penetration test on request. if any of those three are missing, that vendor is a context.ai-class risk waiting to happen.

the underlying point

the supply chain conversation has been about packages for so long that "supply chain" and "package registry" feel like synonyms in most security teams' working vocabulary. the vercel breach is the moment that vocabulary stops being adequate.

the supply chain is now: packages, plus base images, plus models on hugging face, plus oauth grants to third-party AI tools, plus the security posture of every saas platform your team has signed up for individually. all of those have first-class blast radius into your environment. all of them are routinely under-inventoried and under-scoped.

an oauth grant for a tool you didn't know existed, given by an employee whose name you don't know, with a scope nobody on your security team has read, is part of your perimeter. context.ai got popped because of a roblox cheat search, and the fallout reached environment variables for hundreds of users across vercel customer organizations.

if you don't know what AI tool oauth grants live in your tenant right now, that's the work for monday.

Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Get in touch if you're auditing oauth grants for AI tools and want help scoping the work.

Related reading

  • Vercel Got Popped Through an AI Tool Nobody Was Tracking as a Vendor
  • Marimo on KEV and the AI Supply Chain Has Arrived
  • Your AI Stack Has a Supply Chain Problem - and TeamPCP Just Proved It