Anthropic is building the inverse of OpenAI's business model. Instead of chasing millions of consumer subscribers, it makes AI models that enterprises pay to use through APIs. This matters because 80% of its revenue comes from companies signing multi-year contracts to plug Claude into their workflows. The company went from $1 billion in annualized revenue at the start of 2025 to $5 billion by August. They are projecting $9 billion by year end and targeting $20 billion to $26 billion in 2026. More importantly, they expect to stop burning cash in 2027 and hit break-even in 2028. OpenAI won't be profitable until 2030 and will burn through $115 billion getting there. The path to an IPO depends on whether Anthropic can keep gross margins climbing from 50% today to 77% by 2028 while revenue growth stays steep enough to justify a $300 billion to $400 billion valuation in their next funding round.

I.

Only a handful of companies can build frontier AI models. It costs billions to train these systems and billions more to run them. Anthropic started in 2021 when a group of researchers left OpenAI over disagreements about how fast to push AI development. They focused on safety and raised money from everyone who mattered. Amazon put in $8 billion total. Google invested $3.5 billion and signed a massive cloud computing deal. The company has raised $37.3 billion across 16 funding rounds and currently sits at a $183 billion valuation.

What separated Anthropic from competitors early on was the decision to skip the consumer game entirely. ChatGPT exploded to 800 million weekly users by going direct to consumers. Anthropic went the other direction and built for enterprises from day one. Anthropic's API revenue in 2025 was double OpenAI's, even though almost nobody outside of tech companies had heard of Claude. The revenue model runs on tokens: every API call is billed per million input tokens and per million output tokens processed. Claude Sonnet 4.5, their main product, costs $3 per million input tokens and $15 per million output tokens. These prices are higher than OpenAI's, but enterprise customers pay them because Claude performs better on coding tasks and long-document analysis.
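To make the per-token billing concrete, here is a minimal cost calculator. The rates are the Sonnet 4.5 list prices quoted above; the request sizes in the example are hypothetical.

```python
# Per-million-token list prices for Claude Sonnet 4.5, as quoted above.
INPUT_PRICE_PER_M = 3.00    # dollars per million input tokens
OUTPUT_PRICE_PER_M = 15.00  # dollars per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call under usage-based pricing."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

# A hypothetical long-document analysis call: 100k tokens in, 20k out.
print(f"${request_cost(100_000, 20_000):.2f}")  # → $0.60
```

Note the asymmetry: output tokens cost 5x input tokens, so generation-heavy workloads (long answers, code synthesis) are billed much more heavily than retrieval-heavy ones.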

The company crossed 300,000 business customers by late 2025. Over 60% of those customers use more than one Claude product. That land and expand motion drives the revenue growth. A developer starts using the API with a credit card, scales usage as they build their application, and eventually the company signs an enterprise contract when usage hits a certain threshold. No lengthy enterprise sales cycles. No armies of account executives. The product sells itself through performance.


II.

Revenue breaks down into 3 buckets. API calls make up 70% to 75% of total revenue. These are pure usage based charges where enterprises pay for every token they process. Consumer and team subscriptions account for 10% to 15%. That includes Claude Pro at $20 per month, Claude Max at $100 to $200 per month, and Claude Team at $30 per user per month. The rest comes from large enterprise contracts with custom pricing, private cloud deployments, and professional services.
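The stated revenue shares can be turned into rough dollar ranges against the $9 billion year-end projection. A sketch, assuming the shares apply to that total:

```python
# Revenue-mix shares from the text; the $9B total is the year-end projection.
TOTAL_REVENUE_B = 9.0
mix = {
    "API calls": (0.70, 0.75),       # usage-based token billing
    "subscriptions": (0.10, 0.15),   # Pro, Max, and Team plans
}

for bucket, (lo, hi) in mix.items():
    print(f"{bucket}: ${TOTAL_REVENUE_B * lo:.2f}B - ${TOTAL_REVENUE_B * hi:.2f}B")
# API calls land at roughly $6.3B-$6.75B, subscriptions at $0.90B-$1.35B;
# the remainder is enterprise contracts, private cloud, and services.
```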

The API model creates automatic revenue scaling. When a customer's product launches and usage spikes 10x overnight, Anthropic's revenue from that customer goes up 10x with zero additional sales effort. This is different from traditional software where you sell seats and then fight for expansion deals. Token based pricing means revenue tracks directly with customer success. If their application grows, Anthropic gets paid more. If usage drops, revenue falls automatically.

Gross margins are the critical constraint. Running inference on these models costs real money; every query burns GPU cycles and electricity. In 2024, Anthropic's gross margins sat between negative 94% and negative 109%: infrastructure alone cost more than the company made in revenue. By 2025, gross margins improved to around 50%. Internal projections show margins climbing to 60% by 2027 and 77% by 2028. Getting there requires 3 things. First, inference costs need to keep falling as newer chips get more efficient. Second, queries need intelligent routing so simple tasks run on cheaper models while complex work uses expensive compute. Third, pricing needs to stay high enough that revenue per token grows faster than cost per token.
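The margin path implies a specific cost target. Holding price per token fixed, a sketch of the required cost decline (the constant-price assumption is mine; in practice prices may also move):

```python
def required_cost_decline(margin_now: float, margin_target: float) -> float:
    """Fractional drop in cost per token needed to move gross margin
    from margin_now to margin_target, holding price per token fixed.
    Cost as a fraction of price is (1 - margin)."""
    return 1 - (1 - margin_target) / (1 - margin_now)

# The 2025 -> 2028 path described above: 50% -> 77% gross margin.
print(f"{required_cost_decline(0.50, 0.77):.0%}")  # → 54%
```

In other words, at constant pricing, cost per token must fall by more than half over three years for the projected margin expansion to hold.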

The capital structure is circular in a way that looks strange but makes sense. Amazon and Google are both major investors and major customers. Anthropic raised billions from them and then committed to spend much of that money back on their cloud infrastructure. Amazon Web Services is the primary training partner. Google Cloud signed a deal to provide 1 million AI chips starting in 2026, worth tens of billions and delivering over a gigawatt of computing power. This arrangement gives Anthropic guaranteed access to compute capacity without building their own data centers. It also means the cash raised in funding rounds flows right back out to pay for infrastructure.

Distribution runs through all the major platforms. Claude is available on AWS Bedrock, Google Vertex AI, and Microsoft's products. In September 2025, Microsoft started integrating Claude into Office 365 and Copilot, reaching over 100 million users. Salesforce expanded their Claude partnership. Deloitte and Cognizant are rolling Claude out to hundreds of thousands of employees. These distribution deals reduce customer acquisition costs to nearly zero. The product sits inside tools enterprises already use. Adoption happens through workflow integration rather than marketing spend.

III.

The market prices Anthropic at 39x estimated 2025 revenue. That multiple only works if you believe they capture a large share of a multi trillion dollar AI market and reach profitability on schedule. The tension sits in 3 places.
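The multiple arithmetic is worth making explicit. Against the $20 billion to $26 billion 2026 target, the rumored $300 billion to $400 billion round implies forward multiples in roughly the 11x to 20x range; the scenario pairings below are illustrative, not from the source.

```python
def revenue_multiple(valuation_b: float, revenue_b: float) -> float:
    """Valuation expressed as a multiple of revenue (both in $B)."""
    return valuation_b / revenue_b

# Forward multiples for a $300B-$400B round against the $20B-$26B
# 2026 revenue target (best-case and worst-case pairings).
print(round(revenue_multiple(300, 26), 1))  # ~11.5x
print(round(revenue_multiple(400, 20), 1))  # 20.0x
```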

First, revenue quality matters more than revenue size. Anthropic is growing faster than almost any company in history, but 70% of that revenue depends on API usage that could shift quickly if competitors drop prices or release better models. Enterprise contracts provide some stickiness because companies integrate Claude deeply into workflows and switching costs are high. But the contracts often have performance guarantees and customers can renegotiate or walk if the model falls behind. Revenue is recurring only as long as the models stay competitive.

Second, the path to profitability assumes gross margins keep climbing while revenue accelerates. Right now they are burning $3 billion in cash annually, down from $5.6 billion last year. Cash burn as a percentage of revenue is falling, which is good. By 2027 they expect to stop burning cash entirely. By 2028 they project break-even and $17 billion in cash flow on $70 billion in revenue. Those numbers require everything to go right: compute costs need to fall on schedule, pricing power needs to hold, revenue growth cannot slow, and customer acquisition costs must stay low. If any of those assumptions break, the timeline stretches and cash requirements go up.

Third, the circular capital structure creates questions about true cash generation. When Amazon invests $8 billion and Anthropic commits to spend most of that back on AWS infrastructure, how much of the revenue growth is real customer demand versus internal transfers between related parties? Google owns 10% of the company and is also a major compute provider. Microsoft is both a partner and a potential competitor. These relationships provide scale and distribution but they also mean a large portion of revenue and spending stays inside a closed loop.

IV.

IPO timing depends entirely on the profitability timeline. No AI company has gone public yet because none of them make money. Anthropic breaking even in 2028 would make them the first frontier AI lab to hit profitability and the obvious candidate for an IPO. That timing also matters because the public markets in 2028 or 2029 will have very different tolerance for AI companies than today. If the technology delivers on its promises and enterprises are seeing real productivity gains, valuations could go higher. If AI turns out to be overhyped and return on investment disappoints, the IPO window might close entirely.

The enterprise focus provides downside protection that consumer AI lacks. OpenAI's 800 million weekly users are impressive but most of them pay nothing or pay $20 per month. Enterprise customers pay thousands or millions per month and sign contracts that lock in revenue for years. If consumer interest in AI chatbots fades, OpenAI's growth slows immediately. If one of Anthropic's enterprise customers cuts usage, it barely moves the needle because revenue is diversified across 300,000 businesses.

Competition will intensify before any IPO. Google, Amazon, and Microsoft are all building their own models while also investing in Anthropic. That conflict creates risk. If Google's Gemini or Amazon's models get good enough, they might shift internal usage away from Claude to keep more revenue in house. Anthropic's pricing power exists because their models perform better on specific tasks like coding. That advantage is temporary and needs constant reinforcement through research and product improvement. Falling behind technically even once could trigger customer churn that craters revenue growth.

Regulatory risk is real but not priced in. Anthropic settled a copyright lawsuit for $1.5 billion in September 2025. That is a one time cost but it signals that legal expenses will be ongoing as governments figure out how to regulate AI training data, model outputs, and liability for mistakes. Europe's AI Act and evolving US oversight will add compliance costs and potentially limit how these models can be used. For a company projecting $70 billion in revenue by 2028, regulatory friction could easily shave a few billion off the top line or force margin compression to cover compliance spending.

V.

The margin expansion story only works if inference costs keep falling. Right now, running these models is expensive enough that gross margins sit around 50%. Getting to 77% by 2028 requires compute to get dramatically cheaper or revenue per token to grow much faster. If chip improvements slow down or if energy costs rise, the margin trajectory flattens and profitability gets pushed out. Anthropic does not control the semiconductor supply chain or the pace of hardware innovation. They are betting on continued exponential improvement in cost per FLOP and energy efficiency. That bet has worked for decades but it is not guaranteed to continue.

Revenue growth at the projected pace requires enterprises to keep spending heavily on AI. The $20 billion to $26 billion revenue target for 2026 assumes corporate AI budgets double or triple from current levels. If economic conditions tighten or if early AI projects fail to deliver returns, that spending could slow. Enterprises will cut experimental AI projects before they cut core systems. Anthropic needs to move from nice to have to mission critical before a downturn hits or revenue growth will stall.

The competitive moat is not structural. Anthropic's advantage today comes from having better models for certain tasks. Training better models requires money, talent, and time, but it does not require anything that competitors cannot replicate. OpenAI, Google, Meta, and well funded startups are all chasing the same performance benchmarks. If any of them leapfrogs Claude on coding or reasoning tasks, enterprise customers will switch. The switching costs are real but not insurmountable. APIs are standardized enough that moving from one provider to another takes weeks not years.

Valuation risk is the biggest constraint for an IPO. At $183 billion today and potentially $300 billion to $400 billion in the next funding round, Anthropic would be one of the largest tech IPOs ever. Public market investors are more skeptical than venture investors. They will demand proof that $70 billion in revenue and $17 billion in cash flow are achievable, not just projected. They will want to see at least 4 quarters of profitability before pricing the company for sustained earnings growth. If Anthropic tries to go public before hitting profitability, the valuation will compress significantly from private market levels. Waiting until 2029 after a year of profitable operations makes more sense than rushing in 2027 while still burning cash.

VI.

Anthropic built a business designed to reach profitability before competitors. The enterprise API model generates higher revenue per customer, longer contract duration, and lower customer acquisition costs than consumer subscription models. Gross margins are climbing toward software company levels as inference costs fall and pricing holds. Cash burn is declining and the company projects break-even in 2028, 2 years before OpenAI.

The IPO framework depends on executing that timeline. If Anthropic hits profitability in 2028 and revenue reaches $30 billion to $40 billion with 60% to 70% gross margins, the public markets will pay a premium multiple for a rare combination of growth and profitability in AI infrastructure. The company would be the first frontier AI lab to prove the unit economics work at scale.

The constraints are execution risk and competitive pressure. Margins only improve if compute costs keep falling and pricing power holds. Revenue only scales if enterprises keep increasing AI spending and Claude stays technically competitive. The circular capital structure with Amazon, Google, and Microsoft provides advantages today but creates conflicts as those companies build competing products.

An IPO in late 2028 or 2029 makes sense if the profitability timeline holds. Earlier than that and the company is still burning cash with unproven unit economics. Later than that and competitors might have caught up or the market window might have closed. The next 3 years determine whether Anthropic becomes a $500 billion public company or gets acquired by one of its investors before reaching the public markets.

This analysis is for educational purposes. It does not constitute investment advice or a recommendation to buy or sell any security. Investors should conduct their own due diligence and consult financial advisors.
