HIGH · Security · Supply-chain · OAuth · April 2026 · Updated 23 Apr

Vercel × Context.ai Breach: What to Check, What to Rotate

By NewMaxx · April 20, 2026 · Updated April 23, 2026

An infostealer infection at Context.ai in February 2026 (now attributed by Hudson Rock to Lumma Stealer delivered via a Roblox cheat download) led to the compromise of OAuth tokens from Context.ai's legitimate Workspace integration. An attacker used one of those tokens to take over a Vercel employee's Google Workspace account and pivot into Vercel's environments. If your Workspace tenant authorized the same integration, or if you run workloads on Vercel, there's work to do today.

Indicator of Compromise: Google OAuth Client ID
110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
Search this string in your Workspace audit logs and OAuth app inventory. Per researcher Jaime Blasco (Nudge Security), the OAuth grant was tied to a now-removed Context.ai Chrome browser extension; Google has since deleted the underlying account.
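If you have already exported audit logs or an OAuth app inventory to CSV, a quick offline scan for the IOC can look like the following minimal sketch. The file name and column layout are assumptions; adapt it to whatever your admin-console export actually contains:

```python
import csv

# The compromised Context.ai OAuth client ID from the bulletin above.
IOC_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def find_ioc_rows(rows):
    """Return every row (as a dict) that mentions the IOC client ID.

    Matches the ID anywhere in any field, so it works regardless of
    which column your particular export puts the client ID in.
    """
    return [
        row for row in rows
        if any(IOC_CLIENT_ID in str(value) for value in row.values())
    ]

def scan_csv(path):
    # 'path' is whatever file your admin-console export produced.
    with open(path, newline="") as f:
        return find_ioc_rows(csv.DictReader(f))
```

Any hit means the grant existed in your tenant: revoke it, pull the full logs for those users, and treat data reachable through its scopes as exposed.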
Status snapshot: 23 April 2026

Confirmed by Vercel (April 19–20): a limited subset of customers had non-sensitive (plaintext-decryptable) environment variables enumerated and decrypted by the attacker. Vercel contacted that initial set of customers directly. After working with GitHub, Microsoft, npm, and Socket, Vercel has confirmed that no Vercel-published npm packages were tampered with; Next.js and Turbopack are clean. Vercel has engaged Mandiant and additional cybersecurity firms, notified law enforcement, and assesses the attacker as "highly sophisticated" based on their operational velocity and in-depth understanding of Vercel's product API surface.

New in the April 23 update: after expanding the investigation to cover additional IOCs and reviewing requests to the Vercel network plus environment-variable read events in the logs, Vercel identified (1) a small number of additional accounts compromised as part of this incident, and (2) a small number of customer accounts with evidence of prior compromise independent of and predating this incident, potentially from social engineering, malware, or other methods. Both sets have been notified directly.

Reported but unconfirmed: a threat actor using the "ShinyHunters" name posted on BreachForums on April 19 offering Vercel databases, access keys, employee accounts, and source code for $2 million. The post was subsequently deleted. The real ShinyHunters group publicly denied involvement from their leak site and said BreachForums has been operated by impostors since the FBI seizure on October 10, 2025. Vercel told TechCrunch it has received no ransom communication from the threat actor. Treat the $2M figure as an impersonator's asking price, not a confirmed transaction.

Likely root cause (Hudson Rock research): a Context.ai employee was infected with Lumma Stealer in February 2026 after downloading Roblox "auto-farm" scripts, a known Lumma vector. The stealer harvested Google Workspace credentials plus logins for Supabase, Datadog, Authkit, and a support@context.ai account, providing the foothold that escalated into Vercel's environment.


How the attack chain actually worked

Public reporting now lets us trace the chain end-to-end. Each step expanded the blast radius without requiring a new exploit; every escalation rode on legitimate access.

  1. February 2026: infostealer infection. A Context.ai employee downloaded Roblox cheat scripts on a work machine. The download carried Lumma Stealer, which exfiltrated browser-stored corporate credentials (Google Workspace, Supabase, Datadog, Authkit, support inbox). Source: Hudson Rock, based on infected-machine browser history and stealer logs.
  2. March 2026: Context.ai's AWS environment compromised. Context.ai detected and blocked unauthorized access at the time, hired CrowdStrike, and notified one customer. It did not initially disclose the incident publicly, and did not realize OAuth tokens for consumer users had also been compromised.
  3. OAuth token abuse into Vercel. A Vercel employee had previously installed the Context.ai browser extension (per OX Security analysis) and signed in with their enterprise Google account, granting broad scopes. The attacker used that compromised OAuth token to access the employee's Vercel Google Workspace account.
  4. Vercel environment enumeration and decryption. From the hijacked Workspace account, the attacker pivoted into the employee's Vercel account, then into a Vercel environment, and maneuvered through systems to enumerate and decrypt environment variables that were not marked "sensitive." Per Vercel, sensitive env vars are stored in a manner that prevents them from being read, and there is no evidence those values were accessed. Vercel's April 23 update uses the phrase "enumerate and decrypt" explicitly: the enumeration path included a decrypt step, not merely reading already-plaintext values.
  5. April 19: public disclosure and sale attempt. Vercel published its bulletin; an impersonator listed the alleged data on BreachForums for $2M. The real ShinyHunters disowned it, and the listing was pulled.

Am I affected?

Two independent exposure paths. Check both; they overlap but aren't the same.

Path A: Google Workspace / Personal Google Accounts

Any org or individual who authorized the Context.ai "AI Office Suite" (or a related Context.ai OAuth integration) against a Google account. Context.ai says hundreds of accounts may be affected across many organizations.

Path B: Vercel Customers

Vercel has directly notified customers whose non-sensitive environment variables were enumerated and decrypted. As of the April 23 update, Vercel has expanded that notification set twice: once for additional accounts found, after a broader log review, to have been compromised as part of this incident, and once for a small number of accounts with evidence of compromise that predates and is independent of this incident (from social engineering, malware, or other methods).

If you weren't contacted, there's no current indication you were in any of those sets, but Vercel's investigation is ongoing and auditing your own environment is still prudent. Vercel has explicitly noted that deleting projects or your account is not sufficient to eliminate risk: compromised secrets remain valid against your production systems until rotated at the source.


Google Workspace Response

Do this first

Search for the OAuth client ID above in your Workspace OAuth app inventory. If it appears, revoke it, pull audit logs, and treat any data accessible to its granted scopes as exposed.

As a Workspace Admin

  1. Audit connected OAuth apps. Go to admin.google.com → Security → Access and data control → API controls → Manage App Access. Search for the Client ID. If present, set the app to Blocked across the whole org, then revoke affected users' third-party app access; existing OAuth tokens are not automatically invalidated by blocking the app alone. Export the full app inventory while you're in there, and review every app with broad scopes; this is the audit you've been putting off.
  2. Pull audit logs. Reporting → Audit and investigation → OAuth log events. Filter by the client ID and by "Context.ai". Look back to early March 2026 at minimum. Also check Login events, Drive events, and Gmail events for users who granted the app.
  3. Force re-authentication for exposed users. Reset passwords, require 2SV re-verification, and sign out active web sessions. Note that a password reset revokes some OAuth tokens automatically, but not all; you may still need to explicitly revoke each affected user's connected third-party app grants from their account.
  4. Tighten going forward. Under API controls, set Unconfigured third-party apps to restricted by default. Explicitly mark reviewed apps as Trusted, Limited (specific Google services), or Blocked. Require admin approval for any new app requesting sensitive scopes on Gmail, Drive, or Calendar. This is the control that would have prevented the original broad OAuth grant that started this mess.
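To turn step 2's log pull into a concrete user list for step 3's forced re-authentication, a hedged sketch over exported token activity records might look like this. The record shape (`actor`, `events`, `parameters`) follows the Admin SDK Reports API token-event format, but verify it against your own export before relying on it:

```python
import json

IOC_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def users_who_granted(activities, client_id=IOC_CLIENT_ID):
    """Return the set of user emails with an 'authorize' token event
    for the given OAuth client ID."""
    users = set()
    for activity in activities:
        email = activity.get("actor", {}).get("email", "")
        for event in activity.get("events", []):
            # Flatten the name/value parameter list into a dict.
            params = {
                p.get("name"): p.get("value")
                for p in event.get("parameters", [])
            }
            if event.get("name") == "authorize" and params.get("client_id") == client_id:
                users.add(email)
    return users

def load_token_events(path):
    # Assumed export shape: one JSON document with an "items" list, as
    # returned by the Reports API with applicationName="token".
    with open(path) as f:
        return json.load(f).get("items", [])
```

Every email it returns is a candidate for the password reset, 2SV re-verification, and session sign-out in step 3.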

As an Individual Google User

  1. Open myaccount.google.com/permissions (Security → Your connections to third-party apps & services).
  2. Look for Context.ai, AI Office Suite, or anything you don't recognize. Click Remove access.
  3. Check myaccount.google.com/notifications and the Security activity page for unfamiliar sign-ins since March 2026. Sign out other sessions.
  4. If you used Context.ai, assume data the app could read (Drive files, Gmail content, calendar, etc., depending on scopes granted) should be treated as exposed.

Vercel Response

These are Vercel's published recommendations as of the April 23 bulletin update, in priority order. The order matters: Vercel elevated MFA to a top action item in the April 20 update and has expanded customer notifications twice in the April 23 update.

  1. Enable multi-factor authentication on your Vercel account. Use an authenticator app or a passkey from vercel.com/account/settings/authentication. Vercel added this as a top-priority recommendation in the April 20 update; it's the single best defense against the credential-replay risk created by this incident.
  2. Review your activity log. Look for unfamiliar deployments, env var edits, team-member additions, or token creation. Dashboard: vercel.com/activity-log  ·  CLI: vercel activity
  3. Rotate every non-sensitive env var that holds a secret. API keys, database creds, tokens, signing keys, webhook secrets: if it wasn't marked "sensitive," treat it as potentially exposed and rotate it at the source as a priority (this matches Vercel's own current wording). Per Vercel, deleting your project or account does not eliminate the risk; compromised secrets stay valid against production until rotated. Old deployments keep using the old values, so rotate AND redeploy, or the stale creds stay live.
  4. Re-save rotated secrets as Sensitive env vars. Vercel has now defaulted new env var creation to "sensitive: on," but existing values must be explicitly migrated. Sensitive values are encrypted and can't be read back from the dashboard or API.
  5. Audit recent deployments. Anything unexpected, anything you didn't trigger, anything with an unusual commit message or author, investigate, and delete if in doubt.
  6. Check Deployment Protection. Set to at least Standard on all projects. If you use bypass-for-automation tokens, rotate them.
  7. Cascade the rotation. Any secret that lived on Vercel and is also used elsewhere (GitHub, Stripe, Supabase, internal services) should be rotated there too; check those audit logs as well. npm tokens stored on Vercel are still in scope for rotation, but see the npm-clean note below: Vercel's own published packages were verified untampered.
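To build a rotation worklist for step 3, you can filter a project's env vars for anything not stored as sensitive. A minimal sketch, assuming records shaped like Vercel's project environment-variable API response (a `key` plus a `type` field, where protected values carry type `"sensitive"`); treat those field names as an assumption and check them against your own API output:

```python
def rotation_worklist(env_vars):
    """Return the keys of env vars NOT stored as 'sensitive', i.e. the
    ones the bulletin says to treat as potentially exposed and rotate.

    Assumes each record has a 'key' and a 'type' field; anything whose
    type is not 'sensitive' goes on the worklist.
    """
    return sorted(
        var["key"] for var in env_vars
        if var.get("type") != "sensitive"
    )
```

Feed it the decoded JSON from your env-var listing, rotate everything it returns at the source, and then re-save the new values as sensitive (step 4).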

Confirmed clean: supply chain

Vercel worked with GitHub, Microsoft, npm, and Socket and confirmed in the April 20 bulletin update that no Vercel-published npm packages (Next.js, Turbopack, the AI SDK, and others) were tampered with. The ecosystem-wide supply-chain attack the BreachForums seller hyped did not materialize. The risk to your own stored npm tokens is unchanged (rotate them), but you don't need to assume Next.js or Turbopack is compromised at the package level.

Why redeploy matters

In Vercel, rotating an env var only applies to future builds. Old deployments keep running with the old value baked in. If an attacker holds the old secret, an unrotated old deployment is a live backdoor. Rotate the secret, update it in Vercel, then redeploy every environment (production + preview + dev).


The Broader Lesson

Now that the origin is public, the chain reads like a security-awareness training slide nobody wanted: an employee personal-use download (a Roblox cheat) on a work machine led to an infostealer infection at one company, which led to OAuth token compromise, which led to enterprise Workspace takeover at a different company, which led to platform-level secrets exposure across a third tier of customers. Five companies' security postures stacked in a single chain.

The actionable lessons are unchanged but reinforced: separate personal browsing from work-credential surfaces, enforce MFA everywhere (Vercel just made it their #1 recommendation for a reason), audit OAuth grants on every SaaS tenant you own, and assume any third-party tool with broad scopes against your Workspace is a potential foothold rather than a productivity convenience. While you're already in the Workspace admin console, export your full OAuth app inventory and apply the same audit to GitHub OAuth apps, Slack apps, and Microsoft 365 enterprise apps.

One-line takeaway

Revoke the IOC app, enable MFA on Vercel, rotate non-sensitive env vars + redeploy, re-save them as sensitive, and audit every OAuth grant on every SaaS tenant you own. The attack path is reusable, and now that the Lumma origin is public, expect copycats.

Primary source: Vercel bulletin