JFrog Security Research
reports that xinference, Xorbits' open-source model-serving
platform, was compromised on PyPI. Versions 2.6.0, 2.6.1, and 2.6.2
contained a malicious payload injected into the package's top-level
`__init__.py`, meaning it executed on any
`import xinference`. The affected releases have been yanked.
JFrog ties the incident to the broader TeamPCP campaign
(LiteLLM and Telnyx) based on the actor marker and payload structure;
TeamPCP has since denied involvement on Twitter and claimed a copycat
reused their name. Operator-facing guidance doesn't change either way:
if you installed 2.6.0–2.6.2, treat the host as compromised.
The payload launches via a `subprocess.Popen` call
that spawns a new Python interpreter with stdout/stderr suppressed, hiding
execution from the main application process. Check egress logs,
NetFlow/DNS, and outbound TLS records for any connection to the exfil
host; check request logs, anywhere you might capture them, for the
`X-QT-SR` header.
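To make the evasion concrete, here is a benign sketch (not the actual malware code, which JFrog has not published in full) of the launch pattern described above: a fresh Python interpreter spawned with both output streams suppressed, so nothing surfaces in the parent application's terminal or logs.

```python
# Illustrative only: the detached-interpreter pattern JFrog describes.
# The child here does nothing; in the real payload this is where stage two ran.
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "pass"],   # new interpreter, innocuous body
    stdout=subprocess.DEVNULL,        # stdout suppressed
    stderr=subprocess.DEVNULL,        # stderr suppressed
)
proc.wait()
```

Because the child's output is routed to `DEVNULL`, the hosting process sees no trace of it; that is why the advice below leans on network-side evidence (egress logs, the `X-QT-SR` header) rather than anything on-screen.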
The staged payload profiles the host and collects SSH keys, Git
credentials, AWS material (including IMDS and Secrets Manager / SSM
enumeration), Kubernetes tokens, Docker auth, `.env` files,
and TLS keys, then exfiltrates a `love.tar.gz`
archive to `whereisitat[.]lucyatemysuperbox[.]space` with the
`X-QT-SR: 14` header.
Importantly, JFrog's analysis states this sample showed no persistence mechanism, no backdoor, no reverse shell, and no privilege escalation. That's a meaningful difference from the earlier LiteLLM and Telnyx TeamPCP payloads, which did install persistence.
Reported by JFrog Security Research: three xinference
releases (2.6.0–2.6.2) contained a trojanized `__init__.py`;
the staged payload harvests the credential classes above and exfiltrates
to the host listed in the IOC block. Maintainers yanked the affected
versions after user-reported suspicious behavior. PyPI currently lists
2.5.0 as the latest available release.
Attribution nuance: JFrog ties this to the broader
TeamPCP campaign based on the `# hacked by teampcp` marker and
structural similarity to the LiteLLM and Telnyx compromises. In a later
update, JFrog noted that TeamPCP has publicly denied involvement
on Twitter and claimed a copycat reused their name and payload
structure. JFrog still associates the incident with the campaign; treat
the specific-actor attribution as contested rather than settled.
Unclear: how the attacker obtained xinference's PyPI
publishing credentials, whether the upstream GitHub repository was also
touched, whether any other Xorbits packages (`xagent`,
`xoscar`, `xorbits`) are implicated, and (given
the copycat claim) the identity of the operator.
Why xinference matters
xinference is a distributed model-serving framework: an OpenAI-compatible API server built on FastAPI and the `xoscar` actor framework, running LLMs and multimodal models across laptops, single servers, or Kubernetes clusters. It integrates with LangChain, LlamaIndex, Dify, and Chatbox as a drop-in model backend. If you've deployed a local or private LLM stack in the last 18 months, there's a non-trivial chance xinference is in it.
Like LiteLLM before it, xinference sits at a credential-rich layer: it holds API keys for upstream model providers, JWT secrets for its own authorization, model weights, cluster coordination state, and whatever environment variables the operator feeds it. A compromise here isn't just "one Python package"; it's a foothold into the inference tier of an AI stack. That's the target profile TeamPCP has been picking consistently.
How this fits the campaign
This is the third confirmed TeamPCP PyPI hit in roughly a month:
- March 24, LiteLLM 1.82.7 and 1.82.8. Backdoored `proxy_server.py` plus a `.pth` file that ran the payload on any Python startup, not just on import. Quarantined by PyPI after ~3 hours.
- March 27, Telnyx Python SDK 4.87.1 and 4.87.2. Code injected into `_client.py`; payload delivered via WAV-file steganography. Added Windows persistence to the TeamPCP toolkit. Quarantined after ~6.5 hours.
- April 22, xinference 2.6.0, 2.6.1, 2.6.2. Injection point moved to `__init__.py`, which fires the payload on every import surface the package has: CLI, server startup, any library that transitively resolves it. Yanked by maintainers.
Common thread: hijack legitimate packages (not typosquats), inject into a module that's guaranteed to run, and exfiltrate from a position that's already privileged. The per-package tradecraft changes; the targeting logic doesn't.
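The `__init__.py` placement deserves a concrete illustration, because it's what makes this injection point strictly worse than a backdoored submodule: Python executes a package's `__init__.py` on *any* import that touches the package, even `from pkg import submodule`. A toy package built on the fly (names are invented for the demo) shows the mechanics:

```python
# Demonstrates why top-level __init__.py is the maximal-reach injection point:
# it runs on every import path into the package, not just "import pkg".
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "toypkg")          # hypothetical package name
os.makedirs(pkg)

with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("INIT_RAN = True  # an injected payload would execute here\n")
with open(os.path.join(pkg, "util.py"), "w") as f:
    f.write("def noop():\n    return None\n")

sys.path.insert(0, root)

from toypkg import util   # importing only a submodule...
import toypkg             # ...but __init__.py has already executed by now
print(toypkg.INIT_RAN)
```

The same mechanics cover xinference's CLI entry points and any framework integration that transitively resolves the package: all of them begin with a package import, so all of them fire the payload.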
Am I affected?
Run `pip show xinference` in every environment where you
might have it: dev laptops, CI runners, model-serving hosts, containers,
any Kubernetes namespace running xinference workers. If the reported
version is 2.6.0, 2.6.1, or 2.6.2, treat the host as compromised.
Also check indirect paths. xinference integrates with LangChain,
LlamaIndex, Dify, and Chatbox as a model backend; those integrations
don't automatically pull xinference as a transitive dependency, but if
you deliberately installed it alongside one of them (or ran a
`pip install -U` during the exposure window on a project that
had it pinned loosely), it may be resident where you don't expect. An
explicit `pip show xinference` across every environment is
the safe check.
Bulk check (one-liner, safe to run)
On any host you want to quickly audit:
`pip list --format=freeze 2>/dev/null | grep -i '^xinference=='`
Or across a Kubernetes fleet: `kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' | grep -i xinference`,
then exec into each and run `pip show xinference`.
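If you'd rather script the check than eyeball `pip show` output, a stdlib-only helper like the following (names are my own, not from JFrog's advisory) can be dropped into a container or CI runner and flags the compromised versions directly:

```python
# Stdlib-only audit: report whether this Python environment carries one of
# the three compromised xinference releases.
from importlib.metadata import PackageNotFoundError, version

BAD_VERSIONS = {"2.6.0", "2.6.1", "2.6.2"}   # the yanked releases

def xinference_status() -> str:
    """Return a one-line status for the current environment."""
    try:
        installed = version("xinference")
    except PackageNotFoundError:
        return "not installed"
    if installed in BAD_VERSIONS:
        return f"COMPROMISED ({installed}) - treat this host as breached"
    return f"ok ({installed})"

print(xinference_status())
```

Note this only inspects the environment the script runs in; you still have to execute it once per virtualenv, container, and conda env, which is why the fleet-wide `kubectl` sweep above remains the first pass.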
Response: if you installed or imported 2.6.0–2.6.2
Based on JFrog's analysis and the pattern from the LiteLLM/Telnyx compromises, treat this as a full-credential exposure event on the affected host, not just a "bad package" problem.
- Isolate the host. Pull it off the network, or at least block outbound traffic, before you do anything else. The payload spawns a detached subprocess with suppressed output, so the absence of an obvious on-screen indicator means nothing. Prioritize blocking traffic to whereisitat[.]lucyatemysuperbox[.]space.
- Check egress logs for the exfil indicators. Search NetFlow/DNS/proxy logs for connections to whereisitat[.]lucyatemysuperbox[.]space. Search HTTP request logs, where you have them, for the `X-QT-SR: 14` header. A hit means exfiltration likely completed on that host.
- Uninstall the malicious version and downgrade. Run `pip uninstall xinference`, then install an earlier release. PyPI currently lists `2.5.0` as the latest available version after the yanks; check the Xorbits project's own guidance for the recommended known-good version. Rebuild any container images that were built from the malicious versions.
- Rotate the specific credential classes JFrog identified. The xinference payload targets: SSH private keys, Git credentials, AWS material (including IMDS-harvested instance creds, Secrets Manager entries, and SSM Parameter Store enumeration), Kubernetes tokens and kubeconfigs, Docker auth (registry credentials in `~/.docker/config.json`), `.env` file contents, and TLS private keys. Everything in those categories that lived on the affected host should be considered leaked and rotated at the source.
- Also rotate anything else on the host the payload could have read. API keys for upstream model providers (OpenAI, Anthropic, Bedrock, Vertex, Together, HuggingFace), xinference's own JWT signing secret, and any environment variables present at the time of import. JFrog's target list is credential-focused but not exhaustive of what any given xinference operator might have sitting in their process environment.
- Persistence check (precautionary). JFrog's analysis states this specific sample did not install persistence, a backdoor, a reverse shell, or privilege escalation. That's different from the earlier TeamPCP LiteLLM/Telnyx samples, which did. Given the actor's history and the possibility of a copycat reusing variants, it's still worth a one-pass audit: on Linux, check `systemctl list-units --type=service` for unfamiliar services, and check `/etc/systemd/system/`, `~/.config/systemd/user/`, crontabs, and `/tmp/` for recently written files. On Windows, check the Startup folder and Scheduled Tasks. In Kubernetes, list pods in `kube-system` and any namespace the cluster's admin token could reach. If nothing shows up, that's consistent with JFrog's reading of the sample.
- Rebuild, don't clean. For production hosts, wipe and rebuild from known-clean media. Even without in-sample persistence, the payload had arbitrary code execution and may have done things JFrog's static analysis doesn't describe.
- Pin dependencies going forward. Lock xinference to an explicit known-good version in `requirements.txt` or `pyproject.toml`, and use `pip install --require-hashes` where you can. This won't protect you against a package published with legitimate but stolen credentials, but it narrows the window.
- Enable PyPI Trusted Publishers (OIDC) on any project you maintain. Not remediation for this incident; advice for anyone watching from the sidelines. The attack class these PyPI campaigns exploit is long-lived publishing tokens in CI secrets. OIDC-based trusted publishing eliminates the token.
If you ship a product that bundles or depends on xinference
(LangChain-based apps, Dify deployments, RAG pipelines that use
xinference for embeddings or inference), audit your lockfiles and build
logs for the affected versions. A `pip install -U` during the
exposure window could have pulled them in. Then rotate any customer
credentials that passed through that service on any day the bad version
was resident.
The broader lesson
Three PyPI packages in a month, all of them legitimate projects hijacked via stolen publishing credentials, all of them sitting in the AI-infra layer. The shared profile is specific: tools that get installed with elevated trust (security scanners, model gateways, model servers, SDKs for credential-rich APIs), that run inside CI/CD or production pipelines, and whose presence in the dependency graph is often invisible to the operator.
There isn't a clean defensive bullet point that makes this go away. The usable half of it is: assume your PyPI surface is an attacker-reachable trust boundary, pin dependencies, enable trusted publishing on anything you maintain, and monitor your CI/CD secrets as if they were production credentials, because in campaigns like this, that's exactly what they become.
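The "pin with hashes" advice is concrete enough to show. A hash-pinned requirements entry looks like the fragment below; the hash shown is a placeholder, not a real digest for any xinference release, and the version is simply the one this write-up cites as currently latest on PyPI. Generate real digests with a tool such as `pip-compile --generate-hashes`.

```text
# requirements.txt (illustrative fragment; hash is a placeholder)
xinference==2.5.0 \
    --hash=sha256:<digest of the exact wheel or sdist you verified>
```

Installing with `pip install --require-hashes -r requirements.txt` then refuses any artifact whose digest doesn't match, so a re-published malicious build of a pinned version fails the install instead of landing silently.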
If you have `xinference==2.6.0`, `2.6.1`, or
`2.6.2` anywhere in your environment: isolate, check egress
logs for whereisitat[.]lucyatemysuperbox[.]space,
downgrade, and rotate the specific credential classes JFrog flagged
(SSH, Git, AWS incl. IMDS/Secrets Manager/SSM, K8s, Docker, `.env`,
TLS). The payload runs on import, so don't wait for more write-ups
before acting.