
The AI Rug Pull


Frontier AI is sold at a structural loss because users are still teaching the models. Three predictions for what happens when the apprenticeship ends — and who gets locked out of the workshop afterward.

Essay · AI

The Apprenticeship

By Shaun Warman · Monday, April 27, 2026 · 9 min read

TL;DR — Takeaways

- Frontier AI is sold at a 4–7x loss per user because the human is the training set, not the customer.
- Three forces — synthetic data, agentic self-play, and saturating returns to RLHF — are closing the apprenticeship window in three to five years.
- When it closes, expect the $20 tier to vanish, top capabilities to gate behind enterprise contracts, and the labs themselves to step in as operators.
- Enterprises and owners of open-weight model capacity survive cleanly. Casual consumers and small operators built on subsidized inference get rug-pulled.
- Build for portability now: closed models for paid work, an open-weight fallback in the architecture, and zero mission-critical workflows on a free tier.

A subscription to a frontier AI assistant runs about $20 a month for an individual and $30 a month for a pro tier with extended limits. The actual cost of serving a heavy user — measured in GPU-hours, electricity, model wear, and the orchestration layer that sits underneath the chat box — is several multiples higher. OpenAI has admitted publicly that even its $200-a-month enterprise plan loses money on the most active users. Anthropic's economics are no better. Google subsidizes Gemini against ad revenue. Meta gives Llama away outright.

This is not normal. Software-as-a-service companies do not run negative gross margins on flagship products. Cloud providers do not lease GPUs below cost. Pricing this anomalous is always information, and the information is this: the user is not yet the customer. The user is the training set.

The Mechanic

Every interaction with a frontier model produces signal. Edits, regenerations, thumbs-up and thumbs-down, the questions that get asked at all, the questions that don't get asked, the way a user phrases a follow-up when the first answer falls short — every one of those is data. Aggregated across hundreds of millions of users and tens of billions of conversations, that data is the moat.

It is also the part the model cannot synthesize for itself. Base capabilities — reasoning, language, code generation — have become roughly fungible across providers. What separates a great model from a good one in 2026 is reinforcement learning from human feedback, and the human in that loop is paying $20 a month for the privilege of being part of the training run.

The Numbers

Estimate the subsidy. A serious individual user of a frontier model burns through $80 to $150 of compute per month at sticker prices. At $20 of revenue, that is a four-to-seven-times loss per user, every month. Multiply across tens of millions of paying subscribers and hundreds of millions of free-tier users, and the actual scale comes into view: tens of billions of dollars per year in deliberate subsidy, financed by venture capital and balance-sheet equity at the major labs.

Investors tolerate this because the return on the data is not yet on the income statement. It is on the model. Every quarter the model gets better in ways that compound. Every iteration of better model lowers the marginal cost of the next iteration. The subsidy is paying for a flywheel — and like every flywheel, the moment it stops needing fuel, the fuel stops being free.

The user is not yet the customer. The user…
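The subsidy arithmetic in the excerpt can be checked with a quick back-of-envelope sketch. The per-user figures are the article's own; the paying-subscriber count is an assumption (the article says only "tens of millions"):

```python
# Back-of-envelope check of the essay's subsidy math.
# Per-user numbers come from the article; the subscriber count is ASSUMED.

MONTHLY_PRICE = 20             # individual tier, USD/month (from the article)
COMPUTE_COST_LOW = 80          # sticker-price compute for a heavy user, USD/month
COMPUTE_COST_HIGH = 150

PAYING_SUBSCRIBERS = 20_000_000  # assumption; article says "tens of millions"

# Loss multiple per user: how many dollars of compute per dollar of revenue.
loss_multiple_low = COMPUTE_COST_LOW / MONTHLY_PRICE    # 4.0x
loss_multiple_high = COMPUTE_COST_HIGH / MONTHLY_PRICE  # 7.5x

# Annualized subsidy across the assumed paying base (free tier excluded).
annual_subsidy_low = (COMPUTE_COST_LOW - MONTHLY_PRICE) * PAYING_SUBSCRIBERS * 12
annual_subsidy_high = (COMPUTE_COST_HIGH - MONTHLY_PRICE) * PAYING_SUBSCRIBERS * 12

print(f"loss per user: {loss_multiple_low:.1f}x to {loss_multiple_high:.1f}x")
print(f"annual subsidy: ${annual_subsidy_low/1e9:.1f}B to ${annual_subsidy_high/1e9:.1f}B")
```

Even before counting the free tier, 20 million paying users at these rates implies roughly $14B to $31B a year, consistent with the essay's "tens of billions" claim.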
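The portability advice in the takeaways (closed model for paid work, open-weight fallback in the architecture) amounts to a thin routing layer. A minimal sketch, with hypothetical stand-ins rather than any real provider SDK:

```python
# Minimal sketch of the "closed model with open-weight fallback" pattern.
# The completer functions here are hypothetical stand-ins, not real SDK calls.

from typing import Callable

def make_completer(primary: Callable[[str], str],
                   fallback: Callable[[str], str]) -> Callable[[str], str]:
    """Return a completer that tries the closed model first and falls
    back to a self-hosted open-weight model if the primary call fails."""
    def complete(prompt: str) -> str:
        try:
            return primary(prompt)
        except Exception:
            # Rug-pull insurance: same interface, locally controlled weights.
            return fallback(prompt)
    return complete

# Toy stand-ins to demonstrate the failover path.
def closed_model(prompt: str) -> str:
    raise RuntimeError("tier discontinued")

def open_weights_model(prompt: str) -> str:
    return f"[local model] {prompt}"

complete = make_completer(closed_model, open_weights_model)
print(complete("summarize Q3"))  # prints "[local model] summarize Q3"
```

The point of the pattern is that the rest of the application depends only on the `complete` interface, so repricing or withdrawal of the closed tier changes one wiring line, not every workflow.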

This excerpt is published under fair use for community discussion. Read the full article at Warman.

