OpenAI dropped the price of o3 by 80%

(twitter.com)

Comments

34679 10 June 2025
I'd like to offer a cautionary tale that involves my experience after seeing this post.

First, I tried enabling o3 via OpenRouter since I have credits with them already. I was met with the following:

"OpenAI requires bringing your own API key to use o3 over the API. Set up here: https://openrouter.ai/settings/integrations"

So I decided I would buy some API credits with my OpenAI account. I ponied up $20 and started Aider with my new API key set and o3 as the model. I get the following after sending a request:

"litellm.NotFoundError: OpenAIException - Your organization must be verified to use the model `o3`. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate."

At that point, the frustration was beginning to creep in. I returned to OpenAI and clicked on "Verify Organization". It turns out, "Verify Organization" actually means "Verify Personal Identity With Third Party" because I was given the following:

"To verify this organization, you’ll need to complete an identity check using our partner Persona."

Sigh. I click "Start ID Check" and it opens a new tab for their "partner" Persona. The initial fine print says:

"By filling the checkbox below, you consent to Persona, OpenAI’s vendor, collecting, using, and utilizing its service providers to process your biometric information to verify your identity, identify fraud, and conduct quality assurance for Persona’s platform in accordance with its Privacy Policy and OpenAI’s privacy policy. Your biometric information will be stored for no more than 1 year."

OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"

sschueller 10 June 2025
Has anyone noticed that OpenAI has become "lazy"? When I ask questions now, it will not give me a complete file or fix. Instead, it tells me what I should do, and I need to ask a second or third time to get it to just do the thing I asked.

I don't see this happening with, for example, DeepSeek.

Is it possible they are saving on resources by having it answer that way?

mythz 11 June 2025
I've been turned off by OpenAI and have been actively avoiding their models for a while. Luckily this is easy to do given the quality of Sonnet 4 / Gemini Pro 2.5.

I've always wondered, though, how OpenAI could get away with o3's astronomical pricing. What does o3 do better than any other model to justify the premium cost?

lvl155 10 June 2025
Google has been catching up. Funny how fast this space is evolving. Just a few months ago, it was all about DeepSeek.
behnamoh 10 June 2025
How do we know it's not a quantized version of o3? What's stopping these firms from launching the full model so it performs well on the benchmarks and then gradually quantizing it (first to Q8 so no one notices, then Q6, then Q4, ...)?

I have a suspicion that's how they were able to get gpt-4-turbo so fast. In practice, I found it inferior to the original GPT-4, but the company probably benchmaxxed the hell out of the turbo and 4o versions, so even though they were worse models, users found them more pleasing.
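(For anyone unfamiliar with the Q8/Q6/Q4 shorthand: it refers to storing weights in 8/6/4 bits instead of 16-bit floats. A rough, illustrative sketch of 8-bit quantization is below; nobody outside OpenAI knows how o3 is actually served, so this is only to show where the quality loss would come from.)

  # Illustrative sketch of "Q8": store weights as int8 plus one scale factor,
  # trading a little accuracy for a big drop in memory and compute cost.
  # Real serving stacks use fancier schemes (per-channel scales, GPTQ, etc.).
  import numpy as np

  def quantize_q8(w: np.ndarray):
      scale = np.abs(w).max() / 127.0          # map the weight range onto int8
      q = np.round(w / scale).astype(np.int8)  # lossy: the rounding error is the quality hit
      return q, scale

  def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
      return q.astype(np.float32) * scale      # approximate reconstruction at inference time

  w = np.random.randn(4, 4).astype(np.float32)
  q, s = quantize_q8(w)
  print(np.abs(w - dequantize(q, s)).max())    # small but nonzero error, in every layer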

BeetleB 10 June 2025
Why does OpenAI require me to verify my "organization" (which requires my state issued ID) to use o3?
lxgr 10 June 2025
Is there also a corresponding increase in weekly messages for ChatGPT Plus users with o3?

In my experience, o4-mini and o4-mini-high are far behind o3 in utility, but since I’m rate-limited for the latter, I end up primarily using the former, which has kind of reinforced the perception that OpenAI’s thinking models are behind the competition altogether.

mrcwinn 11 June 2025
Only at HN can the reaction to an 80% price drop be a wall of criticism.
coffeecoders 10 June 2025
Despite the popular take that LLMs have no moat and are burning cash, I find OpenAI's situation really promising.

Just yesterday, they reported an annualized revenue run rate of $10B. Their last funding round in March valued them at $300B. Despite losing $5B last year, they are growing really fast: that valuation works out to roughly 30x revenue, and they have over 500M active users.

It reminds me a lot of Uber in its earlier years—fast growth, heavy investment, but edging closer to profitability.

blueblisters 10 June 2025
This is the best model out there, priced level with or below Claude and Gemini.

They’re not letting the competition breathe

JojoFatsani 10 June 2025
O3 is really good. I haven’t had the same results with o4 unfortunately
ramesh31 10 June 2025
Anthropic will need to follow suit with Opus soon. It is simply too expensive for anything, by an order of magnitude.
seydor 10 June 2025
When the race to the bottom reaches the bottom, the foundation model companies will be bought by ... energy companies. You'll be paying for AI with your electricity bill.
ucha 11 June 2025
Can we know for sure whether the price drop is accompanied by a change in the model, such as quantization?

On Twitter, some people say that some models perform better at night when there is less demand, which allows them to serve a non-quantized model.

Since the models are only available through the API and there is no test to check which version of the model is being served, it's hard to know what we're buying...

ninetyninenine 10 June 2025
You know, LLMs can only be built by corporations... but because they're so easy to build, I see the price going down massively thanks to competition. Consumers benefit because all the companies are trying to outrun each other.
ilaksh 10 June 2025
Maybe because they also are releasing o3-pro.
OutOfHere 10 June 2025
o3 is very much needed in VS Code's GitHub Copilot for Ask/Edit/Agent modes. It is sorely missing there.
monster_truck 11 June 2025
Curious that the usage limits for Plus users remained the same. I don't think they're actually doing anything material to lower the cost by a meaningful amount. It's just margin they've always had, and they cut it because Magistral is pretty incredible for being completely free.
sagarpatil 11 June 2025
Meanwhile Apple: Liquid Glass
stevev 11 June 2025
It was only a matter of time considering DeepSeek R1's recent release. OpenAI's competitor is an open-source product that offers similar quality at a tenth of the cost. Now they're just trying to prevent customers from leaving.
visiondude 10 June 2025
always seemed to me that efficient caching strategies could greatly reduce costs… wonder if they cooked up something new
MallocVoidstar 10 June 2025
Note that they have not actually dropped the price yet: https://x.com/OpenAIDevs/status/1932463601119637532

> We’ll post to @openaidevs once the new pricing is in full effect. In $10… 9… 8…

There is also speculation that they are only dropping the input price, not the output price (which includes the reasoning tokens).

teaearlgraycold 10 June 2025
Personally I've found these bigger models (o3/Claude 4 Opus) to be disappointing for coding.
nikcub 10 June 2025
fyi the price drop has been updated in Cursor:

https://x.com/cursor_ai/status/1932484008816050492

minimaxir 10 June 2025
...how? I'd understand a 20-30% price drop from infra improvements for a model as-is, but 80%?

I wonder if "we quantized it lol" would classify as false advertising for modern LLMs.

alliao 10 June 2025
It used to take decades of erosion to make Google Search a hot mess; now that everything's happening at light speed, AI models take only days to decay into the same hot mess.
candiddevmike 10 June 2025
It's going to be a race to the bottom, they have no moat.
maxcomperatore 11 June 2025
groq is better
biophysboy 10 June 2025
I don't know if this is OpenAI's intention, but the little message "you've reached your usage limit!" is actively disincentivizing me from subscribing. For my purposes, the free model is more than good enough; the difference before and after hitting the limit is negligible. I honestly wouldn't pay a dollar.

That said, I'm absolutely willing to hear people out on "value-adds" I am missing; I'm not a knee-jerk hater. (For context, I work with large, complex & private databases/platforms, so it's not really possible for me to do anything but ask for scripting suggestions.)

Also, I am 100% expecting a sad day when I'll be forced to subscribe, unless I want to read dick pill ads shoehorned into the answers (looking at you, YouTube). I do worry about getting dependent on this tool and watching it become enshittified.

boyka 11 June 2025
80%? So is this the same Trump-style "art of the deal" move of setting unreasonable pricing in the first place, or a sign that they desperately need customers?
unraveller 10 June 2025
I have no moat and I must make these GPUs scream.
godelski 10 June 2025
For those wondering

  Price           Yesterday              Today
  --------------  ---------------------  ---------------------
  Input           $10.00 / 1M tokens     $2.00 / 1M tokens
  Cached input    $2.50 / 1M tokens      $0.50 / 1M tokens
  Output          $40.00 / 1M tokens     $8.00 / 1M tokens
https://archive.is/20250610154009/https://openai.com/api/pri...

https://openai.com/api/pricing/
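
To put the cut in per-request terms, here's a quick back-of-the-envelope using the prices above (the token counts are made up for illustration):

  # Hypothetical request: 50k input tokens (10k of them cached) and 5k output tokens.
  OLD = {"input": 10.00, "cached": 2.50, "output": 40.00}  # $ per 1M tokens (yesterday)
  NEW = {"input": 2.00,  "cached": 0.50, "output": 8.00}   # $ per 1M tokens (today)

  def cost(p, fresh_in, cached_in, out):
      return (fresh_in * p["input"] + cached_in * p["cached"] + out * p["output"]) / 1_000_000

  print(cost(OLD, 40_000, 10_000, 5_000))  # 0.625 -> ~$0.63 before
  print(cost(NEW, 40_000, 10_000, 5_000))  # 0.125 -> ~$0.13 after, an 80% cut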

koakuma-chan 10 June 2025
OpenAI dropped the price by so much that the server also went down.
polskibus 10 June 2025
Is this a reaction to Apple paper showing that reasoning models don’t really reason?
madebywelch 10 June 2025
They could drop the price 100% and I still wouldn't use it, so long as they're retaining my data.