TokenCost
Comparison · April 11, 2026 · 7 min read

OpenAI's new $100 ChatGPT Pro: what you actually get on Codex, and when the API wins anyway

OpenAI added a new tier on April 9. It slots between Plus ($20) and the $200 Pro plan, which is staying put under the same name. Here's what the $100 plan actually includes, what it costs to run Codex through the API instead, and the math that tells you which to pick.

[Photo by Scott Graham on Unsplash]

OpenAI added a $100/month Pro tier on April 9 - not a price cut on the $200 plan, but a new slot below it priced to match Anthropic's Claude Max. For Codex power users, the decision is roughly: under 6 medium cloud tasks per day, the API comes out cheaper; above that, or if you want a predictable bill, the subscription wins. Business teams have a third option as of April 2 - pay-as-you-go Codex seats that bill on token consumption with no rate limits.

A new tier, not a price cut

A lot of the initial coverage got this wrong. OpenAI's $200/month Pro plan is not going away. The $100 plan is a new slot below it, with roughly half the Codex capacity and access to GPT-5.3-Codex-Spark (a research preview exclusive to this tier). If you were hoping OpenAI cut its top tier in half, that's not what happened.

The timing is about Anthropic. OpenAI launched the $100 tier at the same price as Anthropic's Claude Max subscription - the plan that lets Claude Code users run agents without thinking about per-session costs. Anthropic's coding tool reportedly crossed $2.5 billion in annualized revenue by February 2026. OpenAI has been watching that closely.

Codex usage has also grown fast enough to justify a new pricing tier. Sam Altman posted on April 7 that more than 3 million people use Codex weekly, up 5x in three months, with monthly growth above 70%. That's the demand signal behind this.

| Plan | Price | Notes |
| --- | --- | --- |
| Free | $0/month | Limited access, no Codex |
| Go | $8/month | No Codex, no reasoning models |
| Plus | $20/month | Baseline Codex access |
| Pro (new) | $100/month | 5x Codex vs Plus; 2x temporary boost through May 31 (effectively 10x Plus) |
| Pro (original) | $200/month | 20x Codex vs Plus |
| Business | $25/seat/month ($20 annual) | Pay-as-you-go Codex seats available |
| Enterprise | Custom | SLA, compliance, custom limits |

Codex limits by plan

OpenAI doesn't publish fixed numbers like "200 tasks/day." The limits use rolling 5-hour windows and adjust with system load and which model variant the task runs on. These are the ranges OpenAI has communicated publicly.

| Plan | Cloud tasks / 5 hrs | Local messages / 5 hrs |
| --- | --- | --- |
| Plus | 10-60 | Varies |
| Pro $100 (standard) | 200-1,200 | 600-3,000 |
| Pro $100 (2x boost promo, through May 31) | ~10x Plus effective | ~10x Plus effective |
| Pro $200 | 20x Plus | 20x Plus |
| Business (standard seats) | 5-40 | Varies |
| Business (pay-as-you-go Codex seats) | No limit | No limit |

Limits adjust with system load and model variant used. Ranges from OpenAI's Help Center and third-party reporting.

What Codex costs through the API

Running Codex via the API has two cost components: token pricing and container fees. They add up differently depending on how heavy your sessions are.

Token pricing

The main Codex API model is codex-mini-latest, a smaller variant tuned for speed. Based on available pricing sources, it runs at approximately $1.50 per million input tokens and $6 per million output tokens - though this has shifted as OpenAI has updated the model. Sessions that escalate to gpt-5.2-codex push output costs to around $14 per million.

Container fees

Cloud tasks run in sandboxed containers billed per 20-minute session; a session that ends early still bills the full rate. These fees took effect March 31 and apply to all Hosted Shell and Code Interpreter sessions.

| Container size | Cost / 20-min session | Effective hourly |
| --- | --- | --- |
| 1 GB | $0.03 | $0.09/hr |
| 4 GB | $0.12 | $0.36/hr |
| 64 GB | $1.92 | $5.76/hr |
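A minimal sketch of how that billing rule plays out, assuming the per-session rates in the table above and that every started 20-minute window bills in full (the function name and rate table are illustrative, not OpenAI's API):

```python
import math

# Assumed container rates per 20-minute session, from the table above.
SESSION_RATES = {"1GB": 0.03, "4GB": 0.12, "64GB": 1.92}

def container_cost(size: str, minutes: float) -> float:
    """Cost of a cloud task: each started 20-minute session bills at the full rate."""
    sessions = max(1, math.ceil(minutes / 20))
    return sessions * SESSION_RATES[size]

# A 25-minute run on a 4 GB container bills two full sessions:
print(round(container_cost("4GB", 25), 2))  # 0.24
```

Note the ceiling: a 21-minute task costs the same as a 40-minute one, which is why short, frequent tasks are where container fees quietly add up.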

The break-even math

A medium Codex cloud task - 4 GB container, 200K input tokens, 20K output tokens on codex-mini - works out to roughly:

  • Container (4 GB, 1 session): $0.12
  • Input tokens (200K × $1.50/M): $0.30
  • Output tokens (20K × $6/M): $0.12
  • Total per task: ~$0.54

At that rate, $100 covers about 185 tasks a month - roughly 6 per day. Below that threshold, the API is cheaper; above it, the subscription pays off.

The math shifts fast if sessions get heavier. A task that pulls in 1M tokens of repo context and escalates to gpt-5.2-codex output can hit $5-10 per session on the API alone. At that level, even 2-3 heavy sessions per day push past $100/month.
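The break-even arithmetic above can be reproduced directly, assuming the article's estimated codex-mini rates (~$1.50/M input, $6/M output) and the published 4 GB container fee; the function is a sketch, not an official calculator:

```python
# Assumed rates: codex-mini input/output per million tokens, 4 GB container per session.
INPUT_PER_M, OUTPUT_PER_M, CONTAINER_4GB = 1.50, 6.00, 0.12

def task_cost(input_tokens: int, output_tokens: int, sessions: int = 1) -> float:
    """Estimated API cost of one cloud task: container fee plus token charges."""
    return (sessions * CONTAINER_4GB
            + input_tokens / 1e6 * INPUT_PER_M
            + output_tokens / 1e6 * OUTPUT_PER_M)

medium = task_cost(200_000, 20_000)   # 0.12 + 0.30 + 0.12 = 0.54
breakeven = 100 / medium              # ~185 tasks/month
print(f"${medium:.2f} per task, break-even at {breakeven:.0f} tasks/month")
```

Swapping in heavier numbers (say, 1M input tokens at gpt-5.2-codex output rates) makes it easy to see how quickly a single task climbs into the $5-10 range.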

| Task type | Est. API cost | Break-even tasks/month |
| --- | --- | --- |
| Light (1 GB container, short context) | ~$0.25 | ~400 (~13/day) |
| Medium (4 GB container, 200K context) | ~$0.54 | ~185 (~6/day) |
| Heavy (large context, gpt-5.2-codex output) | $5-10 | 10-20 (~1-2/day) |

Estimates based on codex-mini-latest (~$1.50/$6 per million tokens) and published container rates. Actual costs vary with model escalation and session length.

The Business change that matters more than the headline

A week before the $100 announcement, OpenAI changed how Codex works for Business teams. Workspaces can now add pay-as-you-go Codex seats that bill purely on token consumption - no fixed per-seat price, no rate limits. Add a seat, a developer starts a session, workspace credits get charged at API rates.

The Business plan itself also dropped $5/seat/month, from $30 to $25 monthly or $25 to $20 on annual billing. For a 10-person team, that's $600/year. The pay-as-you-go Codex seats are separate from this - they're for developers who need to run heavy workloads without hitting subscription caps.
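The team-savings figure above is simple to sanity-check (a throwaway sketch, using the $5/seat/month drop stated in the text):

```python
def annual_savings(seats: int, per_seat_drop: float = 5.0) -> float:
    """Yearly savings from a per-seat monthly price drop."""
    return seats * per_seat_drop * 12

print(annual_savings(10))  # 600.0 - matches the 10-person team figure
```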

There's a limited promotion through April 30: $100 in workspace credits per new Codex seat activated, capped at $500 per workspace. If your team is evaluating whether to try it, this month is the cheapest entry point.

Who should use which option

The $100 subscription makes sense if you run Codex most of the day and want a predictable bill. That predictability has real value - especially for freelancers who don't want to think about per-task costs mid-session.

API billing makes more sense if you run a production pipeline with controlled volume, use multiple providers so you're not locked to ChatGPT, or your actual usage stays below 6 medium tasks per day. There's also no hard cap on the API side: subscriptions cut you off at the per-window limit; the API doesn't.

For teams, pay-as-you-go Business seats are probably the most flexible option right now. No rate limits, no subscription tier to outgrow. The downside is unpredictable billing, which matters if you're managing a budget for multiple developers.

The $200 Pro plan still exists and still offers 20x Codex vs Plus. If you were already on that plan, the $100 tier doesn't change much for you. It's aimed at people who wanted more than Plus but couldn't justify $200.


Sources