The Compute Efficiency Frontier

A Companion Explainer

3 Pilgrim LLC

Version 1.0 · February 5, 2026



1) Why This Paper Exists

Over the last several years, AI capability has improved sublinearly while cost has grown superlinearly. Larger clusters consume more power, generate more heat, move more bits across longer fabrics, and require increasingly elaborate coordination—yet benchmark gains continue to flatten.

Industry narratives initially treated this as a temporary engineering lag: better chips, denser interconnects, improved cooling, or more data would restore prior scaling slopes. Instead, each local improvement shifted pressure elsewhere in the system.

This paper argues that the observed flattening is not accidental, cyclical, or purely economic. It is the natural result of multiple physical and informational constraints coupling into a single limiting surface.

We call that surface the Compute Efficiency Frontier (CEF).

The CEF is not a wall you hit in one dimension. It is a multidimensional boundary along which the marginal capability gained per additional unit of cost, power, data, or scale approaches zero. Past this frontier, additional investment produces entropy, coordination loss, and stranded capital—not intelligence.
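One way to make this picture precise (the notation here is ours, a sketch rather than the paper's formalism): let x = (x_1, …, x_n) be a vector of resource inputs such as cost, power, data, and cluster scale, and let C(x) be the capability achieved with those resources. The frontier is then the region of resource space where marginal returns vanish in every direction at once:

```latex
% Hypothetical formalization (notation ours, not taken from the paper).
% x = (x_1, \dots, x_n): resource inputs (cost, power, data, scale)
% C(x): capability achieved with those resources
\mathrm{CEF} \;=\; \Bigl\{\, x \;\Bigm|\; \frac{\partial C}{\partial x_i} \approx 0 \ \text{for all } i \,\Bigr\}
```

The key property this notation captures is that the frontier is joint: relieving one partial derivative (say, cheaper power) merely shifts the binding constraint to another coordinate, which matches the paper's observation that each local improvement moves pressure elsewhere in the system.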


2) What the Paper Says (Plain Language)

Each constraint is rooted in a different physical or informational law. None alone explains the slowdown. Together, they define a convex efficiency boundary.


3) What Distinguishes This Framework


4) Theoretical Implications

(Assuming the Framework Is Correct)

This aligns directly with the earlier semiotic correction: most scaling today increases correlated capacity, not independent degrees of freedom.
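The distinction between correlated capacity and independent degrees of freedom can be illustrated numerically. The toy example below (ours, not from the paper) builds a dataset whose nominal width is 64 columns, but where those columns are noisy copies of only 8 underlying signals. A singular-value count shows that the effective number of independent directions stays near 8 no matter how many correlated copies are added:

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 independent signals vs. 64 noisy copies of those same 8 signals:
# nominal dimensionality grows 8x, independent degrees of freedom do not.
base = rng.normal(size=(1000, 8))                        # 8 independent directions
copies = np.tile(base, (1, 8))                           # 64 columns, all correlated
scaled = copies + 0.01 * rng.normal(size=copies.shape)   # slight measurement noise

def effective_rank(X, tol=0.01):
    """Count singular values that carry more than `tol` of the spectrum's mass."""
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s / s.sum() > tol))

print(scaled.shape[1])         # nominal capacity: 64 columns
print(effective_rank(scaled))  # independent degrees of freedom: ~8
```

In this framing, "scaling that increases correlated capacity" is like adding more columns to `copies`: the matrix gets wider and more expensive to store and move, but the quantity that plausibly tracks capability, the count of independent directions, barely changes.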


5) Potential Implications

(Downstream, Not Predictions)

A) Strategy & Economics

B) Infrastructure & Operations

C) Data, Training, and Evaluation