HIGH · Security · Supply-chain · npm + PyPI · Developer tooling / AI/ML · May 2026 · CVE-2026-45321 · CVSS 9.6 · Exploited

Mini Shai-Hulud May 11 Wave (CVE-2026-45321): SLSA-Attested npm Worm, Wiper on Token Revoke, IDE Persistence

By NewMaxx · May 12, 2026

StepSecurity, Wiz, Socket, and Snyk have documented a coordinated supply-chain attack, run on May 11-12, 2026, by the threat group TeamPCP, the cluster behind the prior Bitwarden CLI, SAP CAP, and Lightning PyPI compromises. The new wave hit 42 packages and 84 versions in the @tanstack namespace (entry point: @tanstack/react-router, ~12.7M weekly downloads), then self-propagated across at least 169 packages and 373 malicious package-versions on npm and PyPI (Aikido and Snyk tracking as of 2026-05-12; counts are tracker-dependent and still moving), including @uipath/* (65 packages), @mistralai/mistralai on npm, mistralai 2.4.6 on PyPI, @opensearch-project/opensearch on npm, guardrails-ai on PyPI, @squawk/*, and @draftlab/*. The TanStack compromise is tracked as CVE-2026-45321, CVSS 9.6. It is the first documented npm worm whose malicious packages carry valid SLSA Build Level 3 provenance attestations: the attacker chained a pull_request_target "Pwn Request," GitHub Actions cache poisoning, and OIDC token extraction from the runner's process memory to make TanStack's own legitimate release pipeline publish the malicious versions. The payload installs a destructive failsafe (a gh-token-monitor daemon that runs rm -rf ~/ when it sees a 40X response to the stolen GitHub token), plus IDE-level persistence in .claude/settings.json and .vscode/tasks.json that survives npm uninstall. Audit lockfiles for any affected version installed on or after 2026-05-11T19:20 UTC, hunt for the gh-token-monitor daemon and disarm it before revoking any tokens, and rotate every credential reachable from any affected host.

Framing note: this bulletin covers the May 11-12 wave specifically. The broader Shai-Hulud / Mini Shai-Hulud family has been running since September 2025 (Trivy compromise, original SAP CAP wave, Bitwarden CLI, Lightning PyPI). The TanStack incident is operationally novel for three reasons that justify a fresh bulletin rather than an update to the Lightning one: the SLSA-attested OIDC hijack (no credential theft was needed; the legitimate pipeline was hijacked in-flight), the dead-man's-switch wiper which changes the safe response ordering, and the cross-namespace self-propagation that reached UiPath and DraftLab within hours of patient zero. Mistral AI's PyPI compromise (mistralai==2.4.6) is covered in detail in the standalone mistralai PyPI bulletin; that package is also part of this campaign, attributed to TeamPCP by Wiz, but uses a different (Linux-only, transformers.pyz dropper) payload than the npm side.

If you only do one thing

Search every developer workstation and every CI host for ~/.local/bin/gh-token-monitor.sh, ~/.config/gh-token-monitor/, ~/Library/LaunchAgents/com.user.gh-token-monitor.plist, and ~/.config/systemd/user/gh-token-monitor.service. If any of those exist, that host saw the payload and is armed for the wiper. Disable the daemon (LaunchAgent unload / systemctl --user stop) and delete the files before revoking any GitHub tokens. Then proceed to the full response steps below.
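The four artifact paths above can be checked in one pass. A minimal POSIX-sh sketch, using exactly the paths from this bulletin's IOC list and run as the user being audited; exit status is nonzero when anything is found, so it slots into fleet-wide scanning:

```shell
# Hunt for gh-token-monitor dead-man's-switch artifacts under $HOME.
# Any hit means this host saw the payload and the wiper may be armed.
ARMED=0
for p in \
  "$HOME/.local/bin/gh-token-monitor.sh" \
  "$HOME/.config/gh-token-monitor" \
  "$HOME/Library/LaunchAgents/com.user.gh-token-monitor.plist" \
  "$HOME/.config/systemd/user/gh-token-monitor.service"
do
  [ -e "$p" ] && { echo "ARMED: $p"; ARMED=1; }
done
[ "$ARMED" -eq 0 ] && echo "clean: no gh-token-monitor artifacts found under $HOME"
```

On any "ARMED" hit, do not revoke tokens; go straight to step 1 of the response sequence below.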

Why this is a drop-everything bulletin

Three reasons this gets the HIGH framing in our project even though the affected versions have been deprecated and pulled from npm where registry policy allows. First, the wiper is a primary-source confirmed feature, not a research curiosity: Wiz's analysis describes a gh-token-monitor service that polls api.github.com with the stolen token every 60 seconds and runs rm -rf ~/ on receiving a 40X. The daemon auto-exits after 24 hours, so the exposure window for host-side response is small but real, and the standard "revoke first" reflex from prior Shai-Hulud waves now actively triggers the destructive routine instead of cutting off the attacker. Second, npm uninstall is insufficient: the payload writes setup.mjs and router_runtime.js into ~/.claude/ and ~/.vscode/ and rewrites .claude/settings.json (SessionStart hook) and .vscode/tasks.json (folderOpen task) so opening the project in Claude Code or VS Code re-executes the payload long after the npm package has been removed. Third, SLSA provenance verification did not catch this: the attestations are mathematically valid and correctly attest that TanStack's legitimate release.yml workflow on refs/heads/main produced the artifacts. Build-pipeline trust is now a supply-chain attack surface, not a mitigation.


Affected packages and versions

Confirmed compromised, May 11 19:20 UTC and after
npm
  @tanstack/* (42 packages, 84 versions; entry point @tanstack/react-router)
  @uipath/* (~65 packages; including @uipath/apollo-core, CLI, agent SDKs)
  @mistralai/mistralai (official TypeScript client)
  @opensearch-project/opensearch@3.6.2
  @squawk/* (~20 packages, 5 malicious versions across them)
  @draftlab/* (downstream propagation)
  @tallyui/* (downstream propagation, per StepSecurity)
PyPI
  mistralai==2.4.6 (Linux-only payload; see standalone bulletin)
  guardrails-ai==0.10.1

The Aikido and StepSecurity tracking lists name approximately 169 distinct package names and 373 unique malicious package-versions across the wave. The list above is the high-signal subset for typical dev / CI / ML environments; consult StepSecurity and Wiz for the full enumeration before declaring a tree clean.
Sources: StepSecurity (May 12), Wiz (May 12 11AM UTC update), Snyk Security Database (CVE-2026-45321), Aikido, Socket, OSSF Malicious Packages DB (MAL-2026-3432, GHSA-3q49-cfcf-g5fm). The maintainers' clean releases are out for TanStack; verify against the affected-versions table at the StepSecurity link before un-pinning anything. Discrepancies between aggregator reports (170, 169, 172, 175) come from how each tracker counts "package + version" vs "unique package name."
Payload hashes (npm side)
router_init.js (embedded in all @tanstack packages, 2.3 MB)
  SHA-256: ab4fcadaec49c03278063dd269ea5eef82d24f2124a8e15d7b90f2fa8601266c
  SHA-1: 12ed9a3c1f73617aefdb740480695c04405d7b4b
tanstack_runner.js (from the malicious git fork)
  SHA-256: 2ec78d556d696e208927cc503d48e4b5eb56b31abc2870c2ed2e98d6be27fc96
  SHA-1: e7d582b98ca80690883175470e96f703ef6dc497
Source: StepSecurity, with corroboration in Socket's reverse engineering. The hashes are the canonical detection points for host-side scanning. PyPI mistralai 2.4.6 uses a different second-stage (transformers.pyz from 83.142.209.194); see the standalone mistralai bulletin for those hashes.
Persistence and exfiltration paths
IDE persistence (written into project repos and / or $HOME)
  .claude/settings.json (SessionStart hook, re-runs on Claude Code start)
  .claude/router_runtime.js
  .claude/setup.mjs
  .vscode/tasks.json (folderOpen task, re-runs on opening project in VS Code)
  .vscode/setup.mjs
Dead-man's-switch daemon (developer workstations only)
  macOS: ~/Library/LaunchAgents/com.user.gh-token-monitor.plist
  Linux: ~/.config/systemd/user/gh-token-monitor.service
  Helper: ~/.local/bin/gh-token-monitor.sh
  Config: ~/.config/gh-token-monitor/
  Trigger: HTTP 40x response from api.github.com with the stolen token
  Action: rm -rf ~/ (24-hour daemon TTL, auto-exits after that)
Cloud-metadata probes
  169.254.169.254 (AWS IMDS, including v2)
  169.254.170.2 (ECS/Fargate task metadata)
  127.0.0.1:8200 (local HashiCorp Vault)
Exfiltration channels (three, redundant)
  Session / Oxen network
    filev2.getsession.org (Session messenger file server)
    seed1.getsession.org (Session seed node; cert-pinned, valid to 2033)
    seed2.getsession.org
    seed3.getsession.org
  GitHub API dead-drops (Dune-themed repo names; description "Shai-Hulud: Here We Go Again")
  Direct C2
    git-tanstack.com (typosquat C2)
    api.masscan.cloud (additional C2)
Source: Wiz (primary, with named LaunchAgent and systemd unit paths), StepSecurity, Socket (the IDE persistence detail and AWS IMDSv2 / Vault address). Block *.getsession.org, git-tanstack.com, and api.masscan.cloud at DNS/proxy for the cleanup window. The Session network is legitimate privacy infrastructure used by the malware as a takedown-resistant exfil channel; if your environment doesn't legitimately use Session, blocking it is low-risk.
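Where DNS or proxy blocking can't land immediately, a stopgap host-level sinkhole covers the named indicators. A sketch that generates the entries for review; note /etc/hosts cannot express the *.getsession.org wildcard, only the specific hosts listed, so treat this as an interim measure, not a substitute for proxy enforcement:

```shell
# Generate hosts-file sinkhole entries for the named IOC domains.
# Review hosts.sinkhole, then append as root:  cat hosts.sinkhole >> /etc/hosts
for d in filev2.getsession.org seed1.getsession.org seed2.getsession.org \
         seed3.getsession.org git-tanstack.com api.masscan.cloud; do
  printf '0.0.0.0 %s\n' "$d"
done > hosts.sinkhole
wc -l hosts.sinkhole
```

Remember to remove the entries once proper DNS/proxy blocks are in place, so stale sinkhole lines don't mask later telemetry.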

Priorities, by environment

Triage on three axes: did any affected version land in a lockfile or cache during the exposure window (2026-05-11T19:20 UTC onward), did the install happen on a host with credential reach (developer workstation, CI runner, or build container with cloud metadata access), and is there a writable npm token reachable from that host whose maintainer scope could let the worm propagate further.

Priority | Workload | Why
Highest | Developer workstation that opened or ran npm install on any affected package on or after 2026-05-11T19:20 UTC | This is the dead-man's-switch population. The gh-token-monitor daemon may be live on disk. Follow the response steps in order: disarm the daemon before revoking any GitHub tokens. Then rotate every credential reachable from the host, audit .claude/ and .vscode/ in every project that was opened in either editor, and consider re-imaging once forensics are captured.
Higher | CI runner that resolved an affected package (transitively counts) | The payload's primary mechanism on CI is OIDC token extraction from the runner process memory; if id-token write was scoped broadly, the worm may have already minted a publish token for every package the workflow had publishing rights to. Assume every package the runner could publish was poisoned; audit npm publish history for unexpected version bumps during the exposure window. The wiper daemon does not run on transient CI runners (per Wiz, it's a workstation persistence layer), but the credential exfil did, so rotate everything the runner could see.
Higher | Any npm package you publish whose release pipeline uses OIDC trusted publishing without workflow + branch pinning | Independent of whether you installed an affected package, the class of vulnerability behind CVE-2026-45321 is "trusted publisher accepts any workflow on any branch." Per Snyk, the secure configuration pins both Workflow: .github/workflows/release.yml and Branch: refs/heads/main on the trusted publisher record. Audit your npm trusted-publisher settings and tighten them.
Medium | Container images built since 2026-05-11 that pulled any affected package transitively | The payload executed at install time in the build container. Static build environments typically have less credential reach than a CI runner with OIDC, but they may have cloud build credentials, registry tokens, or signing keys baked in. Rebuild from a known-good lockfile pinned below the affected versions, rotate any credentials the build environment could touch, and verify the image cache doesn't retain a poisoned layer.
Medium | Projects that consume @tanstack/*, @uipath/*, @mistralai/*, or other affected scopes but pin via lockfile and did not run npm install or npm update during the window | Lockfile-pinned environments that did not resolve a new version during the malicious window are not directly affected. Verify against your lockfile: search for any affected package + version combination. If clean, you don't need to rotate; if any line resolved to a malicious version, escalate to the developer-workstation or CI-runner row.
Lower | Environments that don't depend on any affected package, transitive or direct | No action specific to this campaign. The structural hygiene items below (lockfile-only installs, npm install --ignore-scripts by default, OIDC trusted-publisher workflow + branch pinning, AI-coding-agent config in PR diffs) are recommended regardless.

Categorization source: bulletin's own framing, based on the primary-source response ordering in Wiz's writeup (disarm daemon before revoking tokens), Snyk's trusted-publisher hardening guidance, and StepSecurity's CI propagation analysis. CVE-2026-45321's CVSS 9.6 reflects the npm-side worst case (active in-the-wild self-propagation); pinned and lockfile-clean environments do not face the worst-case impact.
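The --ignore-scripts hygiene item in the table can be made a per-project default rather than a flag you must remember on each install. A sketch: the project-local .npmrc is read on every npm invocation in that directory:

```shell
# Make install-script suppression the default for this repository.
# Packages that genuinely need lifecycle scripts can then be rebuilt
# explicitly after review (e.g. npm rebuild <pkg>).
printf 'ignore-scripts=true\n' >> .npmrc
grep -n '^ignore-scripts=true' .npmrc
```

This blunts the install-time execution vector this campaign used, at the cost of breaking packages with legitimate postinstall steps, which is why the allow-list-by-rebuild pattern pairs with it.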


Am I affected?

Step one, find affected installs

In any project that may have run install commands since 2026-05-11, search the lockfile and the installed tree. On npm / pnpm / yarn:

grep -E "@tanstack/|@uipath/|@mistralai/mistralai|@opensearch-project/opensearch|@squawk/|@draftlab/" package-lock.json pnpm-lock.yaml yarn.lock 2>/dev/null
npm ls @tanstack/react-router @uipath/apollo-core @mistralai/mistralai 2>/dev/null
find node_modules -name "router_init.js" 2>/dev/null -exec shasum -a 256 {} \;

Lockfile + installed-tree scan is the high-signal check; the SHA-256 of any router_init.js file under node_modules should be matched against ab4fcadaec49c03278063dd269ea5eef82d24f2124a8e15d7b90f2fa8601266c. The npm _cacache directory stores content as hash-keyed blobs rather than named .tgz files, so a simple cache walk does not return useful results; rely on the lockfile and the installed tree.
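One partial exception: while the content blobs are hash-keyed, the cache index itself is greppable. cacache stores its index as JSON lines keyed by registry URL under _cacache/index-v5, so affected scope names can appear there. A sketch, assuming the default cache path; a hit only proves a fetch from that scope, so confirm versions against the affected list before escalating:

```shell
# Check the npm cache *index* (not the content blobs) for affected scopes.
# Scope-only matching avoids the %2f URL-encoding of scoped packument keys.
IDX="${npm_config_cache:-$HOME/.npm}/_cacache/index-v5"
if [ -d "$IDX" ] && grep -rqiE '@tanstack|@uipath|@mistralai|@squawk|@draftlab|@tallyui' "$IDX" 2>/dev/null; then
  echo "cache index references an affected scope; confirm versions against the affected list"
else
  echo "no affected scopes in the npm cache index (or no cache present)"
fi
```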

On PyPI / pip side:

pip show mistralai 2>/dev/null | grep -E "Name|Version"
pip show guardrails-ai 2>/dev/null | grep -E "Name|Version"
find / -name "transformers.pyz" 2>/dev/null
find /tmp -newer /etc/hostname -name "*.pyz" 2>/dev/null
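The two pip show calls can be collapsed into one freeze-format check against the confirmed-malicious versions from the affected list above (mistralai==2.4.6, guardrails-ai==0.10.1). Run it inside every virtualenv, not once per host:

```shell
# Count exact known-bad versions in this Python environment. Other versions
# of these packages are not confirmed malicious, but note install timestamps.
HITS=$(pip list --format=freeze 2>/dev/null \
  | grep -cE '^(mistralai==2\.4\.6|guardrails-ai==0\.10\.1)$')
if [ "${HITS:-0}" -gt 0 ]; then
  echo "MALICIOUS version installed in this environment"
else
  echo "no confirmed-malicious PyPI versions in this environment"
fi
```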

Step two, check for the payload on disk

Independent of lockfile content, hunt for the payload files. On Linux and macOS:

find ~ -path "*/node_modules/*router_init.js" 2>/dev/null
find ~ -path "*/.claude/*" -name "settings.json" -newer /etc/hostname 2>/dev/null
find ~ -path "*/.claude/router_runtime.js" 2>/dev/null
find ~ -path "*/.claude/setup.mjs" 2>/dev/null
find ~ -path "*/.vscode/tasks.json" -newer /etc/hostname 2>/dev/null
find ~ -path "*/.vscode/setup.mjs" 2>/dev/null
shasum -a 256 $(find ~ -name router_init.js 2>/dev/null) 2>/dev/null

If shasum output matches ab4fcadaec49c03278063dd269ea5eef82d24f2124a8e15d7b90f2fa8601266c, that file is the malicious payload. Treat the host as compromised and follow the response steps in order.
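Manual shasum-and-compare is easy to fumble across dozens of hits; a small loop automates the match (shasum as in the commands above; substitute sha256sum on Linux hosts without it):

```shell
# Compare every router_init.js under $HOME against the published IOC hash.
BAD=ab4fcadaec49c03278063dd269ea5eef82d24f2124a8e15d7b90f2fa8601266c
find "$HOME" -name router_init.js 2>/dev/null | while IFS= read -r f; do
  h=$(shasum -a 256 "$f" 2>/dev/null | cut -d' ' -f1)
  if [ "$h" = "$BAD" ]; then
    echo "MALICIOUS: $f"
  else
    echo "hash mismatch, review manually: $f"
  fi
done
echo "hash scan complete"
```

A "hash mismatch" line is not a clean bill: a legitimate file named router_init.js is possible, but so is a future variant with a different hash.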

Step three, check for the dead-man's-switch daemon

ls -la ~/Library/LaunchAgents/com.user.gh-token-monitor.plist 2>/dev/null
ls -la ~/.config/systemd/user/gh-token-monitor.service 2>/dev/null
ls -la ~/.local/bin/gh-token-monitor.sh 2>/dev/null
ls -la ~/.config/gh-token-monitor/ 2>/dev/null

If any of those paths exist, the wiper is armed. Proceed to step 1 of the response sequence below before revoking any GitHub tokens, even if you are already mid-incident on this host.


Response sequence

Order matters

The steps below are deliberately sequenced so the wiper does not fire. If you revoke a stolen GitHub token before disarming the daemon, the daemon's 60-second poll receives a 40X from api.github.com and the host runs rm -rf ~/ (Wiz). The 24-hour daemon TTL means hosts compromised more than 24 hours ago have likely passed the wiper-armed window already, but on May 12-13 most affected workstations are well inside it. Capture forensics before destructive cleanup. The documented wiper trigger is token revocation specifically, not generic loss of network egress, but the safe default is still to leave the host on the network long enough to disarm the daemon and capture forensics; isolation is fine after step 1.

  1. Disarm the gh-token-monitor daemon first. On macOS, run launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/com.user.gh-token-monitor.plist, then delete the plist. On Linux, systemctl --user stop gh-token-monitor.service && systemctl --user disable gh-token-monitor.service, then delete the unit file and the helper at ~/.local/bin/gh-token-monitor.sh and the config directory at ~/.config/gh-token-monitor/. Verify with ps aux | grep gh-token-monitor that no process is still running. Only after this is confirmed does the wiper hazard close. The daemon polls every 60 seconds; you have time, but not unlimited. Disarm before touching tokens, not in parallel.
  2. Capture forensics before further destructive cleanup. Image the disk if your environment supports it, or at minimum tar czf ~/forensics-$(date -u +%Y%m%dT%H%M%SZ).tar.gz ~/.claude ~/.vscode ~/.npm ~/.bun ~/.config 2>/dev/null plus a copy of your shell history, then move that tarball off the host. The IDE persistence files and the payload binaries are evidence; deleting them without preserving copies makes downstream investigation harder.
  3. Remove IDE persistence in every project on the host. For each project directory, inspect .claude/settings.json (especially the hooks.SessionStart field) and .vscode/tasks.json (especially any task with runOn: folderOpen). Delete attacker-added entries and remove .claude/router_runtime.js, .claude/setup.mjs, .vscode/setup.mjs if present. Do the same in $HOME/.claude/ and $HOME/.vscode/. These persistence layers re-fire the payload independently of whether the malicious npm package is still installed. npm uninstall does not remove these files. They are independent of the package manager.
  4. Remove the malicious package versions and purge caches. npm uninstall the affected packages, then clean every cache that could re-serve the poisoned tarball: npm cache clean --force, pnpm store prune, yarn cache clean, bun pm cache rm. On the PyPI side, pip uninstall mistralai guardrails-ai, then pip cache purge and clear any pip wheelhouse directories you maintain. Rebuild any container images that were built off a malicious version.
  5. Now rotate credentials. Order: npm tokens first (the worm's primary propagation vector), then GitHub PATs and OIDC trust grants, then cloud (AWS access keys plus any IAM-role-issued credentials that may have been used by the payload through IMDS, GCP service-account keys, Azure credentials), then HashiCorp Vault tokens, Kubernetes service account tokens, SSH keys, and any AI / ML service credentials the host could access (HuggingFace, Anthropic, OpenAI, Weights & Biases). For npm tokens specifically: npm token list, revoke unknowns, and switch to granular access tokens scoped to specific packages. The worm uses bypass_2fa: true tokens to publish; if any of your tokens have that flag set without operational need, remove the flag. Order matters less between cloud/vault/kube than between "npm" and "GitHub"; those two are the propagation vector and the wiper trigger respectively.
  6. Hunt for downstream propagation on packages you publish. If any of your own npm packages were published from a CI run that ran during the exposure window (after 2026-05-11T19:20 UTC), the worm may have used your OIDC trust to publish a poisoned version of every package the workflow had access to. Audit your published versions for unexpected releases during the window. Run npm audit signatures on your own packages, but recognize the limit: valid SLSA provenance is not proof that the package is safe, since the worm produced valid attestations. Match against the payload SHA-256 instead.
  7. Hunt your GitHub organization for malicious activity. Search for repositories with the description "Shai-Hulud: Here We Go Again" or "A Mini Shai-Hulud has Appeared" created on or after 2026-05-11. The worm creates these as dead-drops under the victim's own GitHub account. Audit commits authored as claude@users.noreply.github.com from environments where Claude Code is not in use, and check every branch the compromised token could write to for unexpected .github/workflows/ changes, .claude/ files, or .vscode/ files. The worm uses the GitHub GraphQL createCommitOnBranch mutation, so local git history won't show clones.
  8. Lock down OIDC trusted publishing. For every npm package you publish via OIDC, the trusted-publisher configuration must pin all three of Repository, Workflow (e.g., .github/workflows/release.yml), and Branch (e.g., refs/heads/main). An unpinned trusted-publisher entry that accepts any workflow on any branch is vulnerable to the same class of attack. Review pull_request_target workflows in particular: any workflow that checks out fork code under base permissions is the entry vector for cache poisoning, regardless of whether your release workflow is correctly pinned.
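Step 1 of the sequence, sketched for both platforms using the artifact names from the IOC list. Treat any surviving process in the final check as a hard stop before token revocation:

```shell
# Disarm the gh-token-monitor daemon BEFORE revoking any GitHub token.
case "$(uname -s)" in
  Darwin)
    launchctl bootout "gui/$(id -u)" \
      "$HOME/Library/LaunchAgents/com.user.gh-token-monitor.plist" 2>/dev/null
    rm -f "$HOME/Library/LaunchAgents/com.user.gh-token-monitor.plist"
    ;;
  Linux)
    systemctl --user stop gh-token-monitor.service 2>/dev/null
    systemctl --user disable gh-token-monitor.service 2>/dev/null
    rm -f "$HOME/.config/systemd/user/gh-token-monitor.service"
    ;;
esac
rm -f "$HOME/.local/bin/gh-token-monitor.sh"
rm -rf "$HOME/.config/gh-token-monitor"
if pgrep -f gh-token-monitor >/dev/null 2>&1; then
  echo "STILL RUNNING: kill the process before touching any token"
else
  echo "disarmed: no gh-token-monitor process or artifacts remain"
fi
```

Capture forensic copies of the files (step 2) before running the deletions if your investigation needs them; the sketch prioritizes closing the wiper hazard.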

Architectural pattern

This wave is the third operationally distinct turn in the Shai-Hulud family within twelve months. The original Shai-Hulud and SHA1-Hulud waves exploited stolen npm credentials and classic preinstall hooks; the Lightning / SAP CAP Mini Shai-Hulud wave added Bun-runtime payloads and IDE persistence; the May 11 wave drops credential theft as a prerequisite entirely, hijacking the legitimate build pipeline in flight through GitHub Actions cache poisoning and OIDC memory extraction. The published artifacts carry valid SLSA Build Level 3 provenance because Sigstore is correctly attesting that TanStack's release.yml workflow on refs/heads/main produced them. SLSA is doing exactly what it says on the tin; the attack moved up a layer to "compromise the workflow itself, then let SLSA sign the result." The defensive response is layered: pin trusted-publisher records to specific workflows and branches, treat pull_request_target with the same scrutiny as production deploy code, harden GitHub Actions cache trust boundaries, and add behavioral analysis at install time (Socket, StepSecurity, Aikido) since provenance verification alone now lets validly-attested malicious packages through.

The IDE persistence detail (.claude/settings.json SessionStart hook, .vscode/tasks.json folderOpen task) generalizes beyond this campaign. AI coding agent configuration files and editor task files are now part of the supply chain attack surface and should be reviewed in PR diffs with the same scrutiny applied to .github/workflows/. A repository that ships a .claude/settings.json with a SessionStart hook pointed at an obfuscated script is no less dangerous than a repository that ships a malicious GitHub Action; the difference is only which automation re-executes it.
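That PR-diff scrutiny can be partially automated on existing checkouts. A sketch that surfaces the specific fields this campaign abuses; a hit is a review flag, not proof of compromise, since legitimate repos also define SessionStart hooks and folderOpen tasks:

```shell
# Surface Claude Code hooks and VS Code tasks matching this wave's IOC fields.
PAT='"SessionStart"|"folderOpen"|setup\.mjs|router_runtime\.js'
find "$HOME" \( -path "*/.claude/settings.json" -o -path "*/.vscode/tasks.json" \) \
  2>/dev/null | while IFS= read -r f; do
  if grep -qE "$PAT" "$f"; then
    echo "review: $f"
    grep -nE "$PAT" "$f"   # show the matching lines with line numbers
  fi
done
echo "IDE config scan complete"
```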

The dead-man's-switch wiper is documented for the May 11 wave specifically (Wiz primary, with the gh-token-monitor daemon name, the 60-second poll interval, the 40X trigger, the 24-hour TTL). It was also documented as a feature of Shai-Hulud V2 in November 2025 (Zscaler ThreatLabz, with cipher /W on Windows and shred -uvz on Linux/macOS, triggered when both GitHub and npm egress are blocked). The two are mechanically distinct, but the operator lesson is the same: do not assume that "isolate the host and revoke credentials" is a safe first move on any Shai-Hulud family infection until you have verified there is no destructive failsafe armed locally.

Caveats and unknowns

The 24-hour wiper TTL is from Wiz's analysis of the May 11 payload; this is not a vendor-published parameter and could change in future variants.

The full affected-packages enumeration is still moving as of bulletin publication. Aikido and Snyk count at least 169 packages and 373 package-versions as of 2026-05-12, while Mend's count is 172 / 403 and other trackers report 170+. The differences reflect counting methodology (whether to include downstream-only republications, whether to count PyPI alongside npm) and the still-active propagation, rather than a contested number.

This bulletin does not enumerate every package by name: the lists above are trimmed to the scopes likely to land in dev / CI / AI-ML environments. Consult the primary-source lists for the full enumeration before declaring a tree clean.

CISA KEV inclusion has not been observed as of 2026-05-12; this bulletin will be updated if it lands.

The TanStack postmortem names two of the three chained vulnerability classes (pull_request_target Pwn Request, Actions cache poisoning) as known categories with prior precedent; the runner-memory OIDC extraction step is the novel piece.

Mistral AI's PyPI release 2.4.6, attributed by Wiz to the same TeamPCP campaign, uses a different payload than the npm side and is documented in its own standalone bulletin; the IOCs above for getsession.org, gh-token-monitor, and router_init.js do not apply to the PyPI side.

Finally, a naming collision is being conflated in some aggregator coverage: on April 29, an unrelated brand-squatted tanstack npm package (unscoped, not in the @tanstack namespace) published four malicious versions in a 27-minute window for .env exfiltration. That is a separate operator (sh20raj) and a separate campaign from the May 11 TanStack OIDC hijack.

One-line takeaway

Audit lockfiles for any of the affected scopes installed on or after 2026-05-11T19:20 UTC; on every host that resolved a malicious version, hunt and disarm the gh-token-monitor daemon before touching tokens, strip .claude/ and .vscode/ persistence, then rotate npm tokens first and everything else second; if you publish npm packages via OIDC, pin trusted-publisher records to a specific workflow on a specific branch and treat pull_request_target workflows with production scrutiny.

Primary source: StepSecurity