The historical bottleneck
In the late ’90s, the web’s problem wasn’t imagination; it was money. Contemporaries said as much: Bill Gates argued in 1996 that the internet would only thrive once content actually got paid. Jakob Nielsen wrote in 1997 that only a vanishing fraction of sites could survive on ads. WIRED reported record ad quarters that still weren’t “enough to feed the media masses,” and the Journal of Electronic Publishing concluded in 1998 that banners and CPMs simply “weren’t adding up,” leaving ad-dependent sites in the red.
The limiting factor wasn’t HTML. It was monetization rails and pricing that could fund the long tail.
How sludge happens: paying the wrong proxy
We mistook keywords for intent and paid for the tokens they cheaply produced: impressions and clicks. When it’s cheaper to fake a KPI than to earn it, the market will mint the KPI: pop-ups, made-for-advertising (MFA) sites, dark patterns, and empty “engagement.”
That was an objective function failure, not a moral one. We optimized for the wrong thing.
Nielsen’s framing of the web as a “customer-dominated medium” still holds. People are there to get something done. If you price the proxy instead of the purpose, quality collapses into sludge.
AI’s parallel: the unit economics of cognition
Swap “websites” for “agents.” Models are strong. Protocols like MCP and AdCP wire up data and tools. But deep reasoning at impression time is still expensive enough to erode margins.
Nick Ross’s recent analysis draws the pragmatic line: use agents where the computation-to-benefit ratio is favorable (setup, ops, reporting). Be cautious about high-frequency, deep analysis until prices fall further.
Put simply: for any ad opportunity, the expected value of reasoning (probability of a verified action × margin) must exceed its total cost (reasoning tokens + tool calls + any retention penalty). Run cognition only where that inequality holds, and prefer intent peaks (conversion-time “I need to…” moments) over generic impressions.
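A minimal sketch of that gate, assuming per-opportunity estimates exist for each term; every name and number below is illustrative, not any vendor’s API:

```python
def should_reason(p_action: float, margin: float,
                  token_cost: float, tool_cost: float,
                  delta_churn: float, ltv: float) -> bool:
    """Gate deep reasoning on a single ad opportunity.

    Expected value: probability of a verified action times the margin
    on that action. Total cost: reasoning tokens plus tool calls plus
    the retention penalty (exposure-driven churn lift times LTV).
    """
    expected_value = p_action * margin
    total_cost = token_cost + tool_cost + delta_churn * ltv
    return expected_value > total_cost

# A conversion-time intent peak clears the bar easily:
#   EV = 0.05 * $40 = $2.00 vs. cost = $0.03
print(should_reason(0.05, 40.0, 0.02, 0.01, 0.0, 200.0))    # True

# A generic impression with a small churn lift does not:
#   EV = 0.001 * $40 = $0.04 vs. cost = $0.03 + 0.001 * $200 = $0.23
print(should_reason(0.001, 40.0, 0.02, 0.01, 0.001, 200.0)) # False
```

Note that the retention penalty is charged as an up-front cost on every exposure; the next section makes that term precise.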
The retention constraint (and why echo matters)
Even a tiny exposure-driven lift in churn compounds into large lifetime-value losses. If ΔChurn is that lift in churn probability and LTV is customer lifetime value, then the ad must earn at least ΔChurn × LTV in expectation just to be neutrality-safe.
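To make that concrete with purely illustrative numbers: an exposure that lifts churn by just 0.1% (ΔChurn = 0.001) against a $200 LTV must earn at least 0.001 × $200 = $0.20 in expectation, before a single reasoning token is paid for, just to break even on retention.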
That condition is the hinge of the entire model. It’s exactly where echo’s rail is essential: selecting for in-context helpfulness and verifying outcomes so ΔChurn is ~0 or negative.
Without that posture, expected value collapses under churn costs. You regress to 1997 economics, no matter how clever the model or protocol. Everything else here can be vendor-agnostic. This constraint cannot.
Cost curves and the way forward
The early web only bloomed when falling compute and bandwidth costs met rails and measurement that could harvest those declines. The record shows both the scarcity and the turn.
AI will rhyme. Inference costs fall. Model quality rises. The practical path is disciplined:
- Buy cognition where intent is explicit
- Price on verified outcomes
- Obey the retention constraint
- Widen the surface as costs fall (see the cost-curve sketch after this list)
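A minimal illustration of that last point, again with made-up numbers: as per-opportunity reasoning cost falls, more of the intent distribution clears the expected-value bar.

```python
# All figures invented for illustration: four opportunities, ranked by
# intent strength, each with an estimated action probability and margin.
opportunities = [
    (0.050, 40.0),   # conversion-time intent peak
    (0.010, 40.0),   # mid-funnel research moment
    (0.002, 40.0),   # weak contextual signal
    (0.0005, 40.0),  # generic impression
]

# As the per-opportunity cost of reasoning falls, more opportunities
# clear the expected-value bar and the viable surface widens.
for reasoning_cost in (0.50, 0.10, 0.02, 0.004):
    viable = sum(1 for p, m in opportunities if p * m > reasoning_cost)
    print(f"cost ${reasoning_cost:.3f}/opportunity -> {viable}/4 surfaces viable")
```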
That’s how advertising funds intelligence without recreating the sludge equilibrium of the past.
References:
- Gates, B. (1996, January 3). Content is King. Microsoft.
- Nielsen, J. (1997, August 31). Why Advertising Doesn’t Work on the Web. Nielsen Norman Group.
- Web Ad Sales up, but Not Enough. (1997, June 13). Wired.
- Wilson, D. L. (1998, September). Web Ads Aren’t Adding Up. Journal of Electronic Publishing, 4(1).
- Ross, N. (2025, October 27). AdCP And The Math Of Agentic AI: Building For Today’s Economics, Not Tomorrow’s Dreams. AdExchanger.