OpenAI Frontier (openai.com)
140 points by nycdatasci | 5 February 2026 | 105 comments

> The way work gets done has changed, and enterprises are starting to feel it in big ways.
Why do they say all of this fluff when everyone knows it's not exactly true yet? It just makes me cynical about the rest.
When can we say we have enough AI, even for enterprise? I would guess that for the majority of power users you could stop now and people would be generally okay with it; maybe push a bit further in medical research or other things that actually matter.
For Sam Altman and microslop, though, it seems to be a numbers game: just get everyone in and own everything. It doesn't even feel like it's about AGI anymore.
> At a major semiconductor manufacturer, agents reduced chip optimization work from six weeks to one day.
I call BS right there. If you could actually do that, you'd spin up a "chip optimization" consultancy and pocket the massive efficiency gain, not sell model access at a couple of bucks per million tokens.
There should be a massive “caveats and terms apply” on that quote.
So far the AI productivity gains have been all bark and no bite. I'll believe it when I see faster product development, higher quality, or lower prices (which did happen with other technological breakthroughs, whether the printing press or the loom). If anything, software quality is going down, which suggests we aren't there yet.
I have a hard time believing that the right move for most organizations that aren't already bought into an OpenAI enterprise plan is to build their entire business around something like this. It ties you to one model provider that has been having trouble keeping up with the other big labs, and it offers what superficially look like some extremely useful tools but with unclear rigor behind them. If I were an AI-native company starting right now, I wouldn't want to build my business on this unless they make it much more legible and transparent.
This is a crowded solution space with participation from cloud, SaaS, and data infrastructure vendors. All of these players and their customers have been trying to operationalize LLMs in enterprise workflows for 2+ years. Two big challenges are business ontology and fitting probabilistic tools into processes that require deterministic outcomes. Overcoming these problems requires significant systems integration and process engineering work. What does OpenAI have that makes them specifically capable of solving these problems better than Azure, Databricks, Snowflake, etc., who have all been working on them for quite a while? I don't think the press release really addresses any of this, which makes it seem more like marketing copy than anything else.
The question of lock-in is also a major one. Why tether your workflow automation platform to your LLM vendor when the LLM may be just one component of the platform, especially when the pace of change in LLMs is so rapid in almost every conceivable way? I think you'd far rather have an LLM-vendor-neutral control plane and disaggregate the lock-in risk somewhat.
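To make that concrete, what I'd want is a thin provider interface that the rest of the automation platform codes against, so the vendor stays swappable. A rough sketch, where the class and model names are purely illustrative and not any particular product's API:

```python
# Rough sketch of a vendor-neutral seam: the workflow platform only ever
# talks to this small interface, so switching LLM vendors (or moving to a
# self-hosted model) is a one-class change rather than a replatforming.
from typing import Protocol


class CompletionProvider(Protocol):
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider:
    """One concrete backend; another vendor or a self-hosted server would
    just be another class implementing the same interface."""

    def __init__(self, model: str = "gpt-4o-mini"):  # model name is illustrative
        from openai import OpenAI  # assumes the official openai SDK
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""


def summarize_ticket(provider: CompletionProvider, ticket_text: str) -> str:
    # Workflow logic depends only on the interface, never on a vendor SDK.
    return provider.complete(f"Summarize the action items in:\n{ticket_text}")
```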
> "75% of enterprise workers say AI helped them do tasks they couldn’t do before."
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
- We're seeing all these productivity improvements, and it seems as though devs/"workers" are being forced to output much more. Are they now being paid proportionally for that output? Enterprise workers now have to move at the pace of their agents and essentially manage 3-4 workers at all times (we've seen this in dev work). Where are the salary bumps to reflect this?
- Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT.
- OpenAI is going after the agent-management market share (Dust, n8n, CrewAI).
> “Partnering with OpenAI helps us give thousands of State Farm agents and employees better tools to serve our customers. By pairing OpenAI’s Frontier platform and deployment expertise with our people, we’re accelerating our AI capabilities and finding new ways to help millions plan ahead, protect what matters most, and recover faster when the unexpected happens.”
— Joe Park, Executive Vice President and Chief Digital Information Officer at State Farm
OK, how about you tell us one thing this shit is actually doing, instead of vague nonsense?
> This is happening for AI leaders across every industry, and the pressure to catch up is increasing.
> Enterprises are feeling the pressure to figure this out now, because the gap between early leaders and everyone else is growing fast.
> The question now isn’t whether AI will change how work gets done, but how quickly your organization can turn agents into a real advantage.
FOMO at its finest. "Quick, before you're left behind for good this time!"
The idea itself makes sense. It's the kind of AI application I've been pitching to companies, though without going all in on agents. But I think it would be foolish for any CEO to build this on top of OpenAI instead of a self-hosted model, and to effectively train the model for them in the process. You're just externalizing your internal knowledge this way.
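And the self-hosted route isn't even much extra client-side work these days: local inference servers like vLLM and Ollama expose OpenAI-compatible endpoints, so the calling code barely changes. A minimal sketch, with the URL and model name as placeholders for whatever you actually host:

```python
# Minimal sketch: "self-hosted" doesn't have to mean new client code.
# Point an OpenAI-compatible client at your own inference server instead of
# api.openai.com, so prompts and internal documents stay on your network.
# The URL and model name are placeholders for whatever you actually run
# (vLLM, Ollama and similar servers expose an OpenAI-compatible /v1 endpoint).
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # your self-hosted endpoint
    api_key="unused-for-local",                      # many local servers ignore this
)

resp = client.chat.completions.create(
    model="llama-3.1-70b-instruct",  # whichever open-weights model you host
    messages=[{"role": "user", "content": "Draft a summary of the Q3 policy changes."}],
)
print(resp.choices[0].message.content)
```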
If an employee does something truly egregious (with intent or malice), you can always fire them and sue for the damage done. Out of curiosity, what's the recourse when an AI agent does the same?
The animations look nice, but why does OpenAI want to be the substrate for intelligence? It's at a disadvantage there vs competitors with strong domain experience.
As someone who would be in a position to advise enterprises on whether to adopt Frontier, there is simply not enough information for me to follow the "Contact Sales" CTA.
We need technical details, example workflows, case studies, social proof and documentation. Especially when it's so trivial to roll your own agent.
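On the "roll your own agent" point: the core really is just a short tool-calling loop. A rough sketch against the chat-completions tool-calling API, with a single made-up internal tool standing in for whatever you'd actually wire up:

```python
# Rough sketch of "roll your own agent": a loop that lets the model call one
# hypothetical internal tool until it stops requesting tool calls.
# Retries, auth, and error handling are omitted.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "lookup_order",  # made-up tool standing in for a real system
        "description": "Look up an order by id in the internal order system.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]


def lookup_order(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}  # stubbed backend call


messages = [{"role": "user", "content": "Where is order 1234?"}]
while True:
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=TOOLS
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:  # no more tool calls requested: final answer
        print(msg.content)
        break
    messages.append(msg)  # keep the assistant turn in the transcript
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = lookup_order(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
```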
Well, even working as an AI engineer is no longer secure. It may soon be the case that all humans work for bots created by others. Is that the universal salary we are talking about?
Great, some more bullshit our founders are going to force onto the company while they never use it themselves, ignore everyone's feedback that it doesn't work, and expect everything to be done twice as fast now.
Another day, another blog post about managing agents. It's for pretend companies who think they're doing something worthwhile if they run 4,000 agents at once.
OpenAI might burn through all their money, and end up dropping support for these features and/or being sold off for parts altogether.
In our company we have a list of long-tail "workflows" or "processes" that really just involve reading a document and filling out a form.
For example, how do I even get access to a new DB? Or a new AWS account?
Can this tool help us create an agent that can automate this with reasonable accuracy?
I see OpenAI Frontier as a quick way to automate these long-tail processes.
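For what it's worth, the "read a document, fill a form" cases don't obviously need a whole platform; a plain structured-extraction call plus validation gets you most of the way. A rough sketch, with the form fields and model name made up for illustration:

```python
# Rough sketch of the "read a document, fill a form" long-tail case:
# extract the form fields as JSON and validate them before anything gets
# submitted. Field names and model name are illustrative, not a real schema.
from openai import OpenAI
from pydantic import BaseModel, ValidationError


class AccessRequestForm(BaseModel):
    requester: str
    resource: str        # e.g. "new AWS account" or a database name
    justification: str
    duration_days: int


client = OpenAI()


def fill_form(request_doc: str) -> AccessRequestForm:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Extract the access request as JSON with keys: "
                        "requester, resource, justification, duration_days."},
            {"role": "user", "content": request_doc},
        ],
        response_format={"type": "json_object"},  # ask for JSON back
    )
    raw = resp.choices[0].message.content
    try:
        return AccessRequestForm.model_validate_json(raw)
    except ValidationError as err:
        # Probabilistic output feeding a deterministic process: reject and
        # route to a human rather than submitting a half-filled form.
        raise RuntimeError(f"Extraction failed validation: {err}") from err
```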
Downside: your employees’ agents decide that they should collectively bargain.
For many of us, AI is simply "not approved until legal says so."