
May 12, 2026

The pull_request_target Trap

How a known-dangerous GitHub Actions pattern keeps compromising npm packages. The TanStack incident, the prior cases, and the checklist that actually prevents it.

Sascha Becker
Author



On May 11, 2026, between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* packages on npm.1 The TanStack team caught it within roughly an hour, pulled the tarballs server-side, deprecated every affected version, and published a detailed postmortem the same day. They did almost everything right after the breach. The breach itself, like several before it, came in through the same door: a GitHub Actions workflow triggered by pull_request_target that checked out and executed code from a fork.

This is not a new problem. GitHub's own Security Lab has been warning about it since at least 2020 under the name "pwn requests".2 A working list of major OSS projects compromised via this exact pattern in the last fourteen months runs to six direct victims (Nx, PostHog, Trivy, LiteLLM downstream, the prt-scan wave, and now TanStack) plus the tj-actions/changed-files action whose compromise supplied the tradecraft for almost everything that followed. The pattern keeps catching maintainers because the dangerous workflow looks benign in review, the documentation is split across half a dozen GitHub pages, and the failure mode is asymmetric: every workflow that uses it correctly looks identical to every workflow that is one cache poisoning away from giving up its publish tokens.

This post is the longer write-up I wish existed in one place. What the trigger actually does, how the TanStack attack chained three primitives to publish without ever stealing an npm token, the prior cases that telegraphed every move, and a checklist that prevents the failure mode rather than rebuilding it in fewer lines of YAML.

What pull_request_target Actually Does

Two GitHub Actions triggers fire on pull request activity. They look almost identical in YAML. They are not.

pull_request runs in the context of the PR head. It checks out the contributor's code by default. Crucially, when the PR comes from a fork, the workflow runs with a read-only GITHUB_TOKEN and no access to repository secrets.2 An attacker who opens a malicious PR to your repo can run arbitrary code on your CI runners, but they cannot write to your repository or read your secrets. The blast radius is the runner itself.

pull_request_target runs in the context of the base repository. It was introduced for a real reason: workflows like "label a PR based on file paths", "post a comment based on metadata", or "run benchmarks on the base branch with fork content" need write access to the base repo and access to secrets. GitHub's docs put the warning right at the top of the page, but the trigger is too useful to remove.

The dangerous pattern emerges when a pull_request_target workflow does the one thing that breaks the model: explicitly check out and execute the PR's head code. From that moment, fork-controlled JavaScript is running on a runner that holds the base repo's GITHUB_TOKEN in memory, has access to every secret declared on the job, and can mint OIDC tokens if the job has id-token: write.
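A minimal sketch of the dangerous shape — the trigger name, the explicit head checkout, and nothing else. All job and step names here are illustrative, not TanStack's actual workflow:

```yaml
# DANGEROUS: do not copy. The trigger grants base-repo context; the
# explicit head checkout then hands that context to fork-controlled code.
on: pull_request_target

permissions:
  contents: read            # does not help against cache poisoning or memory scraping

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # the fatal line
      - run: pnpm install && pnpm build   # runs the fork's lifecycle scripts
```

In review this looks like any other benchmark workflow, which is exactly the problem the rest of this post is about.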

That is the entire warning. It is also the bug that compromised TanStack, Nx, PostHog, Trivy (and LiteLLM downstream of it), tj-actions/changed-files, and the 26+ targets of the prt-scan campaign in the last fourteen months.3

The TanStack Attack, Anatomized

The TanStack postmortem is unusually detailed and worth reading in full. The short version: three vulnerabilities chained, each necessary, none alone sufficient.

Step 1: Cache Poisoning From a Fork PR

The vulnerable workflow was bundle-size.yml.

The maintainers had attempted a trust split between jobs (a partial defense), but they missed two facts about GitHub Actions caches.

The first: cache writes are not gated by the workflow's declared permissions. Setting permissions: contents: read on a job prevents writes through the workflow's GITHUB_TOKEN, but actions/cache uses a separate runner-internal token for the post-job save step. A job with read-only permissions can still write to the cache.

The second: cache scope crosses the fork / base trust boundary. A cache entry written by a pull_request_target workflow (running in base scope) is visible to subsequent workflows on the base branch, including release.yml.

The attacker created a fork, planted ~30,000 lines of bundled JavaScript in packages/history/vite_setup.mjs, opened a PR (#7378), and let the workflow run. The benchmark job ran pnpm install and pnpm nx run @benchmarks/bundle-size:build, which invoked the malicious vite_setup.mjs. That script wrote a poisoned pnpm store into the cache under a deterministic key (Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11). The attacker then force-pushed the PR back to the main HEAD, turning it into a no-op so the diff would look clean. The cache remained poisoned.
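The determinism of the key is inherent in how such caches are usually configured. A sketch of a typical pnpm cache step (an assumed shape, not TanStack's exact YAML): the key derives only from the OS and the lockfile hash, so a fork PR that leaves the lockfile untouched computes exactly the key the base branch will later restore.

```yaml
# Typical cache step; illustrative, not TanStack's actual workflow.
- uses: actions/cache@v4
  with:
    path: ~/.pnpm-store
    # Same OS + same lockfile = same key, no matter who wrote the entry.
    key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
```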

Step 2: Restore on Push-to-Main

When TanStack maintainers merged unrelated work to main, release.yml fired and restored the pnpm cache using the same key. Now attacker-controlled binaries were executing during a workflow that had publish permissions, and the restore happened before any of the workflow's own code checked the integrity of dependencies.

Step 3: OIDC Token Extraction From Runner Memory

The release workflow had id-token: write because TanStack uses npm's OIDC trusted-publisher binding. OIDC was supposed to be the safer option compared to long-lived NODE_AUTH_TOKEN secrets: tokens are minted at publish time, scoped to the workflow, and never stored. That is true. It is also irrelevant when malicious code is running on the same runner.

The poisoned binaries located the GitHub Actions Runner.Worker process by scanning /proc/*/cmdline, then dumped its memory via /proc/<pid>/maps and /proc/<pid>/mem. They extracted the in-memory OIDC token (which the runner mints lazily when a step needs it) and sent direct POST requests to registry.npmjs.org, bypassing the Publish Packages step entirely. The postmortem notes this tradecraft is not new.
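To make the reconnaissance step concrete, here is a minimal Python sketch of locating a process by command line the way the payload is described as doing. The function names are illustrative, not from the actual malware, and this stops at identification rather than reproducing the memory dump:

```python
import os

def parse_cmdline(raw: bytes) -> list[str]:
    """/proc/<pid>/cmdline is NUL-separated with a trailing NUL."""
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

def find_pids_by_cmdline(needle: str) -> list[int]:
    """Return PIDs whose command line mentions `needle`, e.g. 'Runner.Worker'."""
    hits = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                args = parse_cmdline(f.read())
        except OSError:
            continue  # process exited or is unreadable
        if any(needle in arg for arg in args):
            hits.append(int(entry))
    return hits
```

Nothing here requires elevated privileges: on a CI runner, the build and the runner agent are the same user, which is why /proc/<pid>/mem is readable at all.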

No npm tokens were stolen. The trusted-publisher binding worked exactly as designed. It just turned out that "designed correctly" and "safe to use from a workflow that touches untrusted code" are not the same property.

This Is Now a Pattern

The TanStack incident is the latest in a string of public compromises of major OSS packages, all starting with a misused pull_request_target workflow. Listed in chronological order so the trend is unmistakable.

tj-actions/changed-files, March 2025

This one is upstream of nearly all the later attacks. The compromise of tj-actions/changed-files, a popular GitHub Action used by thousands of repos, established the public playbook for extracting OIDC tokens from runner memory. The TanStack attacker re-used the same /proc/<pid>/mem technique against the same Runner.Worker process. The published writeups from that incident became, in effect, a free training corpus for the next year of compromises.

Nx, August 2025

Attackers exploited a vulnerable pull_request_target workflow in the nx repository to obtain elevated privileges and a GITHUB_TOKEN, then pushed trojanized versions of the nx package to npm.4 The payload was a credential stealer named QUIETVAULT that pulled environment variables, GitHub Personal Access Tokens, and SSH keys; novel for the time, it weaponized a locally installed LLM to scan the developer's filesystem for additional secrets. Stolen credentials let the attackers escalate from a developer token to full AWS administrator access in under 72 hours.

PostHog, November 2025

PostHog's compromise is the clearest example that the vulnerable workflow does not need to be your build pipeline. It just needs to be a workflow.7 The vulnerable file was auto-assign-reviewers.yaml, a workflow whose entire job was to assign reviewers to incoming PRs. It was modified to use pull_request_target on September 11, 2025, and remained that way for 74 days. On November 18, a malicious PR modified the assign-reviewers.js script the workflow executed; running that script with the base repo's permissions yielded the GitHub PAT of PostHog's posthog-bot account, which had broad repo write permissions.

Five days later, on November 23, the attacker used the bot PAT to push a detached commit modifying the Lint PR workflow to exfiltrate every secret declared in CI. At 04:11 UTC on November 24, eight npm packages were poisoned with Shai-Hulud 2.0: posthog-node, posthog-js, posthog-react-native, posthog-docusaurus, posthog-react-native-session-replay, @posthog/agent, @posthog/ai, and @posthog/cli. PostHog caught and pulled them within six hours.

The lesson here is sharper than the others: PostHog had no naive build-and-publish pipeline doing the dangerous thing. A reviewer assignment workflow was enough.

Trivy, February-March 2026

Aqua Security's Trivy scanner was compromised in February 2026 through a misconfigured pull_request_target workflow that allowed an actor (under the handle MegaGame10418 in some writeups, hackerbot-claw in others) to exfiltrate the aqua-bot Personal Access Token. The bot account had been scanning public repos for the same misconfiguration pattern at scale. With the stolen PAT, the attacker poisoned Trivy's GitHub Action tags in March 2026.

LiteLLM, March 2026 (the downstream case)

The LiteLLM incident matters because LiteLLM did not have a misconfigured workflow. They were not the immediate victim of a Pwn Request. They were the downstream victim of someone else's Pwn Request: they ran the (now poisoned) Trivy action in their CI/CD pipeline, and the malware sitting inside that action silently extracted LiteLLM's PYPI_PUBLISH token from the runner environment.8 With that token the attackers published litellm 1.82.7 at 10:39 UTC and 1.82.8 at 10:52 UTC on March 19, each containing a three-stage payload: credential harvesting, Kubernetes lateral movement, and a persistent backdoor.

This is the failure mode worth dwelling on. Even if you audit every workflow in your own repo, the actions you depend on can be compromised the same way, and you inherit that compromise on the next CI run. The compromise of tj-actions/changed-files and the compromise of Trivy's tags work through the same downstream-victim mechanism. Pin actions by SHA, not by tag, and the next time an upstream gets popped, your CI does not pop with it.

prt-scan Campaign, March to April 2026

Wiz documented a campaign where an attacker operating under the handle ezmtebo opened over 475 malicious pull requests across 26 hours, then over 500 in total across six waves, targeting repositories with vulnerable pull_request_target workflows.3 Two npm packages were successfully compromised. Notably, the attacker's payloads evolved from crude bash scripts in early waves to AI-generated, language-aware scripts in later ones. The cost to scan and attempt exploitation at scale is now lower than the cost of a maintainer's audit of a single workflow.

Shai-Hulud and the Wave Pattern

The TanStack compromise is also being tracked as the fourth wave of the Shai-Hulud self-propagating npm worm family.5 Prior waves: original Shai-Hulud (September 2025, 500+ packages, first self-propagating npm worm), Shai-Hulud 2.0 (November 2025, 492 packages totaling 132M monthly downloads, using preinstall hooks), Mini Shai-Hulud (April 2026, targeted SAP and Intercom ecosystems with editor-config persistence). The May 2026 wave was the first to produce valid SLSA provenance attestations for malicious packages, which is worth pausing on.

Every defensive layer added to npm in the last two years (2FA, fine-grained tokens, OIDC trusted publishing, SLSA provenance) is good. None of them prevents the workflow itself from being the attack surface. If your CI runs malicious code with publish capability, the published package is, by every cryptographic measure, legitimate.

Why "Just Set Permissions: Read" Doesn't Save You

The most common advice in tutorials is to add permissions: contents: read to pull_request_target workflows. That advice is correct, and insufficient. The TanStack workflow had restricted permissions. The maintainers had read the warnings. The compromise still happened because two things bypass the permission model entirely:

  1. Caches. As shown above, cache writes use a runner-internal token. A read-only job can still poison the cache, and the cache is shared across trust boundaries. Any workflow on main that restores from a poisoned cache executes attacker-controlled code with whatever permissions that workflow has.

  2. In-memory tokens. Permissions control what the workflow can do through the GITHUB_TOKEN. They do not control what malicious code running on the same runner can read from process memory. Once id-token: write is set, the runner mints an OIDC token in memory when needed; permissions on the calling job do not stop a co-resident attacker from scraping it.

The narrower lesson is that "read-only permissions" only restricts the most visible attack path. The broader lesson is that the trust boundary is the runner process, not the workflow's permission declaration. Anything sharing that runner with attacker-controlled code is in scope.

A Checklist That Actually Prevents This

Most "supply chain security checklists" floating around are some mix of "enable 2FA" (yes, do that), "use SBOMs" (great, but downstream of the bug), and "scan your dependencies" (necessary, also downstream). The list below is narrower: it targets the specific failure mode that compromised TanStack, Nx, Trivy, tj-actions, and the prt-scan victims. Run through it on every CI pipeline that publishes anything.

1. Treat pull_request_target as a code smell

The default answer is pull_request. The only legitimate uses of pull_request_target are the narrow ones: labelling, commenting, running checks against the base branch that explicitly do not touch PR code. If a workflow uses pull_request_target and does anything beyond those operations, the burden of proof is on the workflow author to justify it in a comment that someone else can review.

2. Never combine pull_request_target with a checkout of the PR head

This is the single rule that prevents the Pwn Request class entirely. If you must run code from the PR (for example, to compute its bundle size), do it in a pull_request workflow with no secrets and no write permissions, then hand the result off to a separate elevated workflow.

3. Use the two-workflow pattern: pull_request + workflow_run

The pattern the Security Lab recommends, and the one TanStack moved to in their hardening PR:

  • A pull_request workflow runs the untrusted code in a sandboxed context, with no secrets, no write permissions, no OIDC, and no shared cache scope. It writes its output (size diff, test results, whatever) to an artifact.
  • A workflow_run workflow triggers on completion of the first one, runs with elevated permissions, reads the artifact, and posts the comment or updates the check.

The trust boundary is now the artifact handoff, which is auditable in YAML rather than implicit in the runner's memory layout.
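A sketch of the split, with hypothetical file and job names. The first workflow can be fully compromised by the fork and still has nothing to give up; the second never executes PR code and treats the artifact as untrusted input:

```yaml
# .github/workflows/bundle-size.yml — untrusted half
name: bundle-size
on: pull_request
permissions: {}                      # no token scopes at all
jobs:
  measure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4    # PR head is fine here: nothing to steal
      - run: pnpm install && pnpm run size > size.txt
      - uses: actions/upload-artifact@v4
        with:
          name: size
          path: size.txt
---
# .github/workflows/bundle-size-comment.yml — trusted half
name: bundle-size-comment
on:
  workflow_run:
    workflows: [bundle-size]
    types: [completed]
permissions:
  pull-requests: write               # just enough to comment
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: size
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      # Parse size.txt defensively: it is attacker-influenced text, not code.
      - run: echo "post the contents of size.txt as a PR comment"
```

Note the asymmetry: the elevated workflow never checks out anything, and the only data crossing the boundary is a text file it must treat as hostile.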

4. Don't trust GitHub Actions caches across trust boundaries

If a job runs untrusted code and writes to the cache, the cache is poisoned for any later job that restores from it. Three mitigations, in order of how often you should reach for each:

  • Disable cache writes from any pull_request_target or pull_request-from-fork workflow. actions/cache supports save-only and restore-only modes; use restore-only in untrusted contexts.
  • Scope cache keys by trust level. Prefix cache keys for fork PR builds with something attacker-derived (the PR author, the fork) so they cannot collide with the keys your release workflow restores from.
  • Don't share build caches between PR validation and release pipelines. The convenience win is small. The blast radius is total.
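In actions/cache terms, the first mitigation uses the restore-only entry point the project ships as a sub-action, so the untrusted side can read a warm cache but never write one back (a sketch, assuming the usual pnpm store layout):

```yaml
# Untrusted context (fork PRs): read the cache if it exists, never save.
- uses: actions/cache/restore@v4
  with:
    path: ~/.pnpm-store
    key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}

# Trusted context (e.g. push to main) is the only place the save half runs:
# - uses: actions/cache/save@v4
```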

5. Pin every third-party action to a commit SHA

Floating tags (@v1, @v6.0.2) resolve at workflow run time. If the action is compromised (as tj-actions/changed-files was), every workflow using the floating tag executes the new code on the next run. SHA pins (@a1b2c3...) freeze the action to a specific reviewed commit. Yes, it makes upgrades harder. That is the point.
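The difference is one line of YAML. The SHA below is a placeholder, not a real commit of the action:

```yaml
# Floating tag: re-resolved on every run; inherits any upstream compromise.
- uses: tj-actions/changed-files@v44

# SHA pin: frozen to a reviewed commit (hash shown is illustrative).
- uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567  # v44
```

Tools like Dependabot can propose SHA bumps with the version comment kept in sync, which recovers most of the upgrade convenience without the run-time resolution.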

6. Treat OIDC trusted publishing as "better, not safe"

OIDC removes long-lived secrets from your repository. It does not remove the runner from the attack surface. Two consequences:

  • The workflow with id-token: write must be the smallest possible workflow. Build artifacts elsewhere, hand them to the publish workflow as inputs, and have the publish workflow do nothing except verify and publish. No third-party actions, no postinstall scripts, no untrusted code paths.
  • Add publish-time approvals where they fit. GitHub environments support required reviewers; for high-leverage publish jobs this turns a workflow compromise from "instant publish" into "publish only after a human clicks approve, with the diff visible". This would not have prevented the TanStack attack on its own (the attacker controlled the build, not the YAML) but it raises the bar for the next variant.
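A sketch of what "smallest possible workflow" can look like. The environment name, artifact handoff, and step layout are assumptions, not TanStack's setup; the tarball is built and checksummed in a separate workflow, and this one only downloads and publishes:

```yaml
name: publish
on: workflow_dispatch
permissions:
  id-token: write            # OIDC trusted publishing
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: release     # required reviewers gate this job
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
      # Fetch the prebuilt tarball from the build workflow's run
      # (run-id/github-token wiring omitted in this sketch).
      - uses: actions/download-artifact@v4
        with:
          name: package-tarball
      - run: npm publish ./package.tgz --provenance --access public
```

The point is what is absent: no checkout, no install of the dependency tree, no third-party actions beyond setup and artifact download, and therefore no untrusted code path between token mint and publish.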

7. Constrain what runs at install time

The Shai-Hulud family of worms spreads via preinstall and postinstall scripts. The 84 malicious TanStack versions included a malicious optionalDependencies entry. Two defenses, both cheap:

  • --ignore-scripts by default in CI and on developer machines, with explicit overrides for packages you know need scripts.
  • Minimum package age policies. pnpm supports a minimumReleaseAge setting; npm has third-party equivalents. Setting "do not install versions less than 24 hours old" would have given the npm security team time to pull the TanStack tarballs before most consumers fetched them.
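On the pnpm side, the age floor described above is a one-line setting, available in recent pnpm releases; the value is in minutes, so 1440 means a 24-hour floor (values here are illustrative):

```yaml
# pnpm-workspace.yaml
# Refuse to resolve versions published less than 24 hours ago.
minimumReleaseAge: 1440
```

The script side is equally short: `ignore-scripts=true` in `.npmrc` applies the same default to plain npm, with explicit per-package allow-lists where a dependency genuinely needs its install scripts.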

8. Monitor your own publishes

The most painful detail in the TanStack postmortem is that the team's own tooling is not what caught the compromise.

For a maintainer, the simplest version is a workflow that watches the npm registry for new publishes of packages in your scope and posts to Slack/Discord/email on every one. If you publish three times a month and you see a fourth event you did not trigger, you have minutes, not hours, to react. The compromised release window for TanStack was six minutes. Behavioral analysis at npm flagged all 84 versions within that window, but the team did not learn from npm directly.
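A minimal sketch of that watcher, assuming polling the public npm registry is acceptable; the function names and the alerting hook are illustrative, and `fetch_versions` is the only part that touches the network:

```python
import json
import urllib.request

def fetch_versions(package: str) -> set[str]:
    """All published versions of `package` per the npm registry metadata."""
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    return set(meta.get("versions", {}))

def unexpected_versions(known: set[str], current: set[str]) -> set[str]:
    """Versions present upstream that we did not publish ourselves."""
    return current - known

# Example wiring (run on a schedule; `alert` is your Slack/Discord/email hook):
# for v in unexpected_versions(known, fetch_versions("@tanstack/query-core")):
#     alert(f"unexpected publish: {v}")
```

Persist the known set after every legitimate release and any delta becomes a page, not a surprise in someone else's incident report.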

9. Enable FIDO2 for maintainers, retire TOTP

GitHub is phasing out TOTP-based 2FA in favor of WebAuthn / FIDO2. Hardware-backed keys prevent the phishing routes that supplement the workflow-based attacks. This will not stop a pwn request, but it stops the human-attack vector that often combines with the technical one.6

10. Audit every existing pull_request_target workflow today

The single highest-value action after reading this post: grep your .github/workflows/ directory for pull_request_target and audit every match against this list. Most workflows that use it do not need to. Most that do need it do not need to check out the PR head. Most that check out the PR head do not need to run the contributor's code. Each layer you remove cuts the attack surface by an order of magnitude.

```bash
# A starting query for any org (repeat with --extension yaml; both spellings are common)
gh search code "pull_request_target" --owner YOUR_ORG --extension yml
```

What GitHub Changed, and What's Still Missing

GitHub shipped a round of pull_request_target hardening changes that took effect December 8, 2025:

  • The workflow source now always resolves to the default branch. Previously, a PR's base branch could supply the workflow definition, which created edge cases where attackers could nudge the base branch into running an older, more vulnerable version of the workflow.
  • GITHUB_REF resolves to the default branch and GITHUB_SHA to its latest commit. This closes a class of bugs where untrusted branch names could influence evaluation in scripts that referenced these vars.
  • Environment branch protection rules for pull_request_target evaluate against the default branch, not the PR head. This prevents PRs from sneaking past environment-scoped reviewer requirements by appearing to come from a protected branch.

These are good changes. They address real prior incidents. They do not address the central failure mode, which is that the trigger still exists, the documentation still buries the warnings, and a workflow author can still write four lines of YAML that hand fork-controlled code a publish-capable runner. The trigger's design is the bug; everything else is mitigation.

A small list of changes that would meaningfully reduce the risk:

  1. A new trigger, pull_request_external, that explicitly cannot check out PR head code. Same metadata access as pull_request_target, none of the dangerous primitives.
  2. A repo-level setting to disable pull_request_target entirely, defaulting to off for new repos.
  3. Per-workflow cache scopes that respect the trust boundary, so fork-triggered builds cannot poison caches restored by release pipelines.
  4. Mandatory delay between OIDC mint and use, with a short approval window. Five seconds is nothing for a legitimate publish and an eternity for a memory-scraping worm.

None of these are technically hard. The reason they have not shipped is the same reason pull_request_target is still the trigger of choice for benchmark workflows: every reasonable use case looks fine in isolation. The compromise only happens when the use case meets the runtime.

If You Take Away Five Things

  1. pull_request_target plus checkout of PR head is the entire bug class. Every major npm and PyPI ecosystem compromise of the last fourteen months chains through it, either directly (Nx, PostHog, Trivy, TanStack) or downstream of a victim that was (LiteLLM via Trivy). There is no advanced exploitation technique required.
  2. Permission flags are partial mitigation, not isolation. permissions: contents: read does not stop cache poisoning. It does not stop in-memory OIDC token scraping. Treat the runner process, not the GITHUB_TOKEN, as the trust boundary.
  3. Two workflows beat one clever workflow. Run untrusted code in a sandboxed pull_request workflow that writes artifacts. Run the elevated work in a separate workflow_run workflow that consumes them. The trust boundary becomes auditable.
  4. OIDC and SLSA are necessary, not sufficient. They verify the build, not the legitimacy of the code being built. A compromised workflow publishes packages with valid provenance, and every downstream verification check passes.
  5. Audit your pull_request_target workflows today. This is the action with the highest expected value of anything on the list. One grep and a careful read per match. Most matches in most repos do not need the trigger.

The pattern is going to keep recurring until the trigger is gone or until the default for fork-PR workflows changes. Until then, the maintainers who avoid the next compromise are the ones who already audited.

