Reflections on OpenAI

(calv.info)

Comments

hinterlands 15 July 2025
It is fairly rare to see an ex-employee put a positive spin on their work experience.

I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.

This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

humbleferret 15 July 2025
What a great post.

Some points that stood out to me:

- Progress is iterative and driven by a seemingly bottom-up, meritocratic approach, not a top-down master plan. Essentially, good ideas can come from anywhere, and leaders are promoted based on execution and quality of ideas, not political skill.

- People seem empowered to build things there without asking permission, which leads to multiple parallel projects, with the promising ones gaining resources.

- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.

- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."

- The sheer cost of GPUs changes everything. It is the single biggest factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."

- I liked the framing of the path to AGI as a three-horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.

lz400 15 July 2025
Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true; I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probability. 0.00001% multiplied by infinity is an infinite EV, so you have to treat it like that. Best marketing; it writes itself.
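
To make the arithmetic concrete, here is a toy Python sketch of that Pascal's-mugging logic (the probability and payoffs are, of course, made up — that's exactly the point):

    # Expected value = probability * payoff. Hold a tiny probability fixed
    # and let the claimed payoff grow: the EV grows without bound, so the
    # "but what if it's true?" argument always wins on paper.
    def expected_value(p: float, payoff: float) -> float:
        return p * payoff

    for payoff in (1e6, 1e12, 1e18):  # ever-larger hypothetical payoffs
        print(expected_value(1e-7, payoff))  # 0.1, then 100000.0, then 1e11
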
bhl 15 July 2025
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.

There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.

Worried about how sustainable this is for its people, given the risk of burnout.

a_bonobo 16 July 2025
>Thanks to this bottoms-up culture, OpenAI is also very meritocratic. Historically, leaders in the company are promoted primarily based upon their ability to have good ideas and then execute upon them. Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering. That matters less at OpenAI than it might at other companies. The best ideas do tend to win.

This sets off my red flags: companies that say they are meritocratic, flat, etc. often have invisible structures that favor the majority. Valve Corp is a famous example, where this has led to many problems; see https://www.pcgamer.com/valves-unusual-corporate-structure-c...

>It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.

tptacek 15 July 2025
This was good, but the one thing I most wanted to know about building new products inside of OpenAI is how, and how much, LLMs are involved in the building process itself.
reducesuffering 15 July 2025
"Safety is actually more of a thing than you might guess if you read a lot from Zvi or Lesswrong. There's a large number of people working to develop safety systems. Given the nature of OpenAI, I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones (intelligence explosion, power-seeking). That's not to say that nobody is working on the latter, there's definitely people focusing on the theoretical risks. But from my viewpoint, it's not the focus."

This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, the misaligned intelligence explosion is the safety risk you're thinking of! So readers' "guesses" are actually right that OpenAI isn't really following Sam Altman's:

"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]

[0] https://blog.samaltman.com/machine-intelligence-part-1

troupo 15 July 2025
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.

To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:

--- start quote ---

The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.

--- end quote ---

Replace that with OpenAI.

[1] https://archive.is/2019.04.15-165942/https://twitter.com/joh...

vonneumannstan 15 July 2025
>Safety is actually more of a thing than you might guess

Considering that all the people who led the different safety teams have left or been fired, that Superalignment has been a total bust, and the various accounts from other employees about the lack of support for safety work, I find this statement incredibly out of touch and borderline intentionally misleading.

chvid 16 July 2025
This stuff:

- The company was a little over 1,000 people. One year later, it is over 3,000.

- Changes direction on a dime.

- Very secretive place.

Add to that "everything is a rounding error compared to GPU cost" and "this creates a lot of strange-looking code because there are so many ways you can write Python".

This is not something that is going to last.

retornam 16 July 2025
Doesn't it bother anybody that, according to this post, their product relies heavily on FastAPI, yet they haven't donated to the project and aren't listed as sponsors?

https://github.com/sponsors/tiangolo#sponsors

https://github.com/fastapi/fastapi?tab=readme-ov-file#sponso...

theletterf 15 July 2025
For a company that has grown so much in such a short time, I continue to be surprised by its lack of technical writers. Saying the docs could be better is a euphemism, and I still can't find fellow tech writers working there. Compare this with Anthropic and its documentation.

I don't know what the rationale is for not hiring tech writers, other than nobody having suggested it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.

csomar 16 July 2025
> Good ideas can come from anywhere, and it's often not really clear which ideas will prove most fruitful ahead of time.

Is that why they have dozens of different models?

> Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering.

I don't think the Sam/Board drama confirms this.

> The thing that I appreciate most is that the company "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.

Did you thank your OpenAI overlords for letting you access their sacred latest models?

+-+-

This reads like an ad for OpenAI, or an attempt by the author to court them again? I am not sure how anyone can take his words seriously.

dowager_dan99 16 July 2025
Wild that OpenAI is changing so much that you can post about how things have radically changed in a year, and consider yourself a long-timer after < 16 months. I'm highly skeptical that an org this big is based on merit and that there wasn't a lot of political maneuvering. You can have public politics or private politics, but no politics doesn't exist - at least after you hit <some> number of people, where "some" is definitely < the size of OpenAI. All I hear about OpenAI is politics these days.
jjani 15 July 2025
> The thing that I appreciate most is that the company "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in. There's an API you can sign up and use–and most of the models (even if SOTA or proprietary) tend to quickly make it into the API for startups to use.

The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.

Comparing them gives the exact opposite conclusion - OpenAI is the only model provider that gates API access to their frontier models behind draconian identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.

OpenAI hides their models' CoT (chain of thought: inference-time compute, thinking). Anthropic to this day shows the CoT on all of its models.

Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.

simonw 15 July 2025
Whoa, there is a ton of interesting stuff in this one, and plenty of information I've never seen shared before. Worth spending some time with it.
fogbeak 16 July 2025
Absolutely hilarious to assert that "everyone at OpenAI is trying to do the right thing" and then compare it to Los Alamos, the creators of the nuclear bomb.
jordanmorgan10 15 July 2025
I’m at a point in my life and career where I’d never entertain working those hours. Missed basketball games, seeing kids come home from school, etc. I do think when I first started out, before I had kiddos, some crazy sprints like that would’ve been exhilarating. No chance now, though.
fidotron 15 July 2025
> There's a corollary here–most research gets done by nerd-sniping a researcher into a particular problem. If something is considered boring or 'solved', it probably won't get worked on.

This is a very interesting nugget, and if accurate this could become their Achilles heel.

frankfrank13 15 July 2025
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing

I doubt many people would say something contrary to this about their (former) colleagues, which means we should always take this with a (large) grain of salt.

Do I think (most) AT&T employees wanted to let the NSA spy on us? Probably not. Google engineers and ICE? Palantir and... well, idk, I think everyone there knows what Palantir does.

JonathanRaines 15 July 2025
Fascinating that you chose to compare OpenAI's culture to Los Alamos. I can't tell if you're hinting that AI is as world-ending as nuclear weapons or not.
paul7986 16 July 2025
I have now developed a hate (with a small sprinkle of love) relationship with AI.

This past week I canceled my $20 subscription to GPT, advocated that my friends do the same (I got them hooked), and will just be using Gemini from now on. It can instantly create maps for everything from road-trip travel routes to creek-tubing trips and more. GPT does not do maps, and I was paying $20 for it while Gemini is free? Bye!

Further, and more important, this guy says in his blog that he is happy to help with the destruction of our (white-collar) society, which will cause many, MANY people financial and emotional pain, while he lives high off the hog? An impending 2030 depression, 100 years after the last one, is unfortunately my bet!

Now AI could indeed help us cure disease, but if the majority are destitute while a few hundred or so live high off the hog, the benefits of AI are canceled out.

AI can definitely do the job ten people used to do, yet NOW it's just one person typing into a text prompt to complete the tasks those ten did.

Why are we here, sprinting towards this goal of destruction? Let China destroy themselves!!!

paxys 15 July 2025
> An unusual part of OpenAI is that everything, and I mean everything, runs on Slack.

Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.

Vektorceraptor 16 July 2025
My biggest problem with these new companies is their core philosophy. First, these companies generate their own demand; natural demand for their products rarely exists. Therefore, they act more like sellers than developers. Second, they always follow the same maxim: "What's the next logical step?" This naturally follows from the first premise, because it allows you to ignore everything "real". You are simply bound to logic. They have no "problems" to solve, yet they offer you solutions, simply as a logical consequence of their own logic. Has anyone ever actually asked whether coders would use agents if it meant losing their jobs? Third, this naturally brings to light the B2B philosophy. The customer is merely a catalyst that will eventually become superfluous. Fourth, the same excuse and ignorance of the form "(we don't know what we are doing, but) time will tell". What if time tells you "this is bad and you should and could have known better"?
dcreater 15 July 2025
This is Silicon Valley culture on steroids: I really have to question whether it is positive for any involved party. Codex has almost no mindshare, and rightly so. It's a textbook also-ran, except it came from the most dominant player and was outpaced by Claude Code in a matter of weeks.

Why go through all that? A much better scenario would have been OpenAI carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro.

motbus3 16 July 2025
I didn't find any surprises reading this post.

If anything about OpenAI should bother people, it is how they pretend to be blind to the consequences because of "the race". Leaving the decision of IF and WHAT should be done to only the top heads has never worked well.

nilirl 16 July 2025
Are any of these causal to OpenAI's success? Or are they incidental? You can throw all of this "culture" into an org but I doubt it'd do anything without the literal world-changing technology the company owns.
segalord 16 July 2025
> The thing that I appreciate most is that the company "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement.

That is literally how OpenAI gets data for fine-tuning its models: by testing them on real users and letting them supply data and use cases. (Tool calling, computer use, thinking: all of these were championed by people outside, and OpenAI had the data.)

nembal 15 July 2025
Wham. Thanks for sharing anecdotal episodes from OAI's inner mechanisms from an eng perspective. I wonder: if OAI weren't married to Azure, would the infra be more resilient and require less eng effort to invent things that just run (at scale)?

What I haven't seen much of is the split between eng and research, and how people within the company are thinking about AGI and the future, workforce, etc. Is it the usual SF wonderland, or is there an OAI-specific value alignment once someone is working there?

ThouYS 15 July 2025
These one or two year tenures... I don't know, man.
gsf_emergency_2 15 July 2025
If you'd like some "objective" insights into how bottoms-up innovation at OpenAI works, a research manager there coauthored this under-hyped book: https://engineeringideas.substack.com/p/review-of-why-greatn...

theGnuMe 18 July 2025
>An unusual part of OpenAI is that everything, and I mean everything, runs on Slack. There is no email. I maybe received ~10 emails in my entire time there. If you aren't organized, you will find this incredibly distracting. If you curate your channels and notifications, you can make it pretty workable.

This is super interesting. I work in a group where everything is on Slack, and some pieces are/were super hard. So much so that I want an AI assistant that can manage my Slack feed, etc. I feel like an AI bot/Slack integration is a thing that needs to be done well.
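
For what it's worth, the plumbing for that kind of bot is straightforward these days. Here's a minimal sketch using Slack's Bolt library (slack_bolt) and the OpenAI client; the env var tokens and the model name are placeholders, and the hard part (deciding what's worth surfacing) is left entirely to the prompt:

    import os
    from slack_bolt import App
    from slack_bolt.adapter.socket_mode import SocketModeHandler
    from openai import OpenAI

    app = App(token=os.environ["SLACK_BOT_TOKEN"])
    llm = OpenAI()  # reads OPENAI_API_KEY from the environment

    @app.event("app_mention")
    def summarize_thread(event, say, client):
        # Fetch the thread the bot was mentioned in...
        thread = client.conversations_replies(
            channel=event["channel"],
            ts=event.get("thread_ts", event["ts"]),
        )
        transcript = "\n".join(m.get("text", "") for m in thread["messages"])
        # ...and ask the model for a digest.
        resp = llm.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[{"role": "user",
                       "content": f"Summarize this Slack thread briefly:\n{transcript}"}],
        )
        say(resp.choices[0].message.content, thread_ts=event["ts"])

    if __name__ == "__main__":
        SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()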

LZ_Khan 15 July 2025
This is just the exact same culture as DeepMind, minus the "everything on Slack" bullet point.
ec109685 17 July 2025
No releases since June 13th: https://help.openai.com/en/articles/11428266-codex-changelog

It seems like Claude Code via screen / multiple terminal windows is strictly better than the Codex approach, given that it keeps the developer in the loop.

The only advantage of async Codex is the ability to carry on coding when using a different device, like your phone.

sebslomski 16 July 2025
Good writing; I enjoyed the article. Though it looks like more time was spent writing this article than actually working at OpenAI? A one-year tenure, including a paternity leave?
viccis 15 July 2025
>It's hard to imagine building anything as impactful as AGI

>...

>OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On the one hand, there's the goal of building AGI–which means there is a lot to get right.

I'm kind of surprised people are still drinking this AGI Kool-Aid

upghost 15 July 2025
Granting the "OpenAI is not a monolith" caveat, the use of AI-assisted coding was a curious omission from the article -- no mention of whether it's encouraged or discouraged.
david_shi 15 July 2025
Python monorepo is the biggest surprise in this whole article
daxfohl 16 July 2025
What do people not like about Azure IAM? That's the one I'm most familiar with, and I've always thought it was decent, pretty vanilla.

When I go to AWS it looks similar, except role assignments can't be scoped, so it needs more duplication and maintenance. In that way Azure seems nicer. In everything else, it seems pretty equivalent.

But I see it catching flak occasionally on HN, so I'm curious what others dislike.

d--b 16 July 2025
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing

Of course they are. People in orgs like that are passionate; they want to work on the tech because LLMs are a once-in-a-lifetime tech breakthrough. But they don’t realize enough that they’re working for bad people. Ultimately all of that tech is in the hands of Altman, and that guy hasn’t proven to be the saint he hopes to become.

cess11 15 July 2025
20 years from now, the only people who will remember how much you worked are your family, especially your kids.

Seems like an awful place to be.

kylereeve 16 July 2025
I'm gonna be pedantic: shouldn't it be "bottom-up" instead of "bottoms-up"?
VirusNewbie 16 July 2025
Interesting that so many folks from Meta joined OpenAI - but Meta wasn't really able to roll its own competitive foundational model, so is that a bad sign?

Kind of interesting that folks aren't impressed by Azure's offering. I wonder if OpenAI is handicapped by that as well, compared to being on AWS or GCP.

breadwinner 16 July 2025
> What's funny about this is there are exactly three services that I would consider trustworthy: Azure Kubernetes Service, CosmosDB (Azure's document storage), and BlobStore.

CosmosDB is trustworthy? Everyone I know who used CosmosDB ended up rewriting their code because of throttling.
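
The throttling in question is Cosmos returning HTTP 429 ("request rate too large") once you exceed provisioned RUs. The SDK retries a few times on its own, and past that you end up writing shims like this sketch (account URL, key, and database/container names are placeholders):

    import time
    from azure.cosmos import CosmosClient
    from azure.cosmos.exceptions import CosmosHttpResponseError

    client = CosmosClient("https://myaccount.documents.azure.com", credential="<key>")
    container = client.get_database_client("db").get_container_client("items")

    def upsert_with_backoff(item, attempts=5):
        for attempt in range(attempts):
            try:
                return container.upsert_item(item)
            except CosmosHttpResponseError as err:
                if err.status_code != 429:  # only retry on throttling
                    raise
                # Simple exponential backoff; the service also suggests a
                # retry-after interval if you want to honor it instead.
                time.sleep(0.1 * 2 ** attempt)
        raise RuntimeError(f"still throttled after {attempts} attempts")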

southernplaces7 17 July 2025
>Nabeel Qureshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special.

I clicked that other story just to see what that author said about his time at that company. The fundamental urge is to slap some bloody sense into him, because the only thing amazing about the article is just how obtuse it is.

Under all the glowy, wide-eyed applause for many aspects of how Palantir operates, he's not really internalizing just what kind of parasitic, corporate/government surveillance monster he was working for.

To put it simply, he speaks as if he were a useful idiot: praising selectively, glossing over a very serious moral swamp with mealy-mouthed categorizations of what's cool and not cool and ideological whitewash, and generally failing to see the fucking forest for the trees. Palantir and its kind represent a disgusting possibility for the future of how society is managed by increasingly autocratic government tendencies and their hustling corporate allies. And I can think of no more chilling a notion than just what that management could entail for hundreds of millions of people, under the guise of supposedly fighting all kinds of ambiguous or outright invented "bad actors".

With OpenAI, you get a lot of corporate mendacity and the usual stew of trying to game the rules for the company's own favor. However, underneath this is a product that's largely used by ordinary people for doing fairly ordinary things, even if many of them contribute to the burial of culture in the cheapest auto-generated sludge imaginable.

With Palantir, the fruits of the labor are... squarely something else entirely.

imiric 15 July 2025
Thanks for sharing.

One thing I was interested to read but didn't find in your post is: does everyone believe in the vision that the leadership has shared publicly, e.g. [1]? Is there some skepticism that the current path leads to AGI, or has everyone drunk the Kool-Aid? If there is some dissent, how is it handled internally?

[1]: https://blog.samaltman.com/the-gentle-singularity

paradite 16 July 2025
This is an incredibly fascinating read into how OpenAI works.

Some of the details seem rather sensitive to me.

I'm not sure if the essay is going to stay up for long, given how "secretive" OpenAI is claimed to be.

yahoozoo 15 July 2025
It would be interesting to read the memoirs of former OpenAI employees that dive into whether they thought the company was on the right track towards AGI. Of course, that’s an NDA violation at best.
ishita159 15 July 2025
This post was such a brilliant read: how they still have a YC-style startup culture, are meritocratic, and people get to work on things they find interesting.

As an early-stage founder, I worry about the following a lot:

- changing directions fast when I lose conviction
- things breaking in production
- speed, or the lack of it

I learned to actually not worry about the first two.

But if OpenAI shipped Codex in 7 weeks, small startups have lost the speed advantage they had. Big reminder to figure out better ways to solve for speed.

mehulashah 16 July 2025
While their growth is faster and technology different, the atmosphere feels very much like AWS back in 2014. I stayed for 8 years because I enjoyed it so much.
randometc 15 July 2025
What’s the GTM role referenced a couple of times in the post?
Havoc 15 July 2025
Interesting read!

Discounting Chinese labs entirely for AGI seems like a misstep though. I find it hard to believe there won’t be at least a couple of contenders.

sarthaksoni 16 July 2025
Great read! As a software engineer sitting here in India, it feels like a privilege to peek inside how OpenAI works. Thanks for sharing!
noname120 16 July 2025
Does Sa… uh OpenAI still do stock clawbacks from employees who say negative things about the company after leaving?
throwawayohio 15 July 2025
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.

I appreciate where the author is coming from, but I would have just left this part out. If there is anything I've learned during my time in tech (ESPECIALLY in the Bay Area) it's that the people you didn't meet are absolutely angling to do the wrong thing(TM).

maxnevermind 16 July 2025
What I really wanted to know is whether OpenAI (and other labs, for that matter) actually uses its own products, not just casually but with LLMs as a core part of how they operate. For example: using LLMs for coding in prod, training/fine-tuning internal models for staying aligned on the latest updates, finding answers, etc. Do they put their money where their mouth is; do LLMs help with productivity? There is no mention of it in the article, so I guess they don't?
tehnub 16 July 2025
>As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google.

Sleeping on Keen Technologies, I see

hoseja 16 July 2025
>As a result, OpenAI is a very secretive place.

The choice of name continues to provide incredible amusement.

nsoonhui 16 July 2025
He joined last May and left recently: about one year of stay.

I wonder whether one year is enough time for programmers to understand a codebase, let alone meaningfully contribute patches. But then we see that job hopping is increasingly common, which results in a drop in product quality. I wonder what value the job hoppers are adding to the company.

greatgib 17 July 2025
What is funny is that this guy really seems to think that they are working on AGI, or that OpenAI is on the way to it, despite what we all know: that this is serious bullshit disconnected from reality.

So it leaves me wondering whether employees really believe that and are drinking the Kool-Aid of their own marketing, or whether this is just another communications move.

teiferer 16 July 2025
>It's hard to imagine building anything as impactful as AGI,

Where is this AGI that you've built, then? The very existence of that term is an acknowledgement that what's hyped today as AI isn't actually what AI used to mean, but the hype cycle that VC money depends on uses the term AI, so a new term was invented to denote the thing the old term used to denote. Do we need yet another term because AGI is about to get burned the same way?

> and LLMs are easily the technological innovation of the decade.

Sorry, what? I'm sure it feels that way from some corners of that particular tech bubble, but my 73-year-old mom's life is not impacted by LLMs at all - well, except for when she opens her Facebook feed once a month and gets blasted with tons of fake BS. Really something to be proud of for us as an industry? A tech breakthrough of the last decade that might have literally saved her life, though, was mRNA vaccines, and I could likely come up with more examples if I thought about it for more than 3 seconds.

breadwinner 16 July 2025
> As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google.

Umm... I don't think Zuckerberg would agree with this statement.

suncemoje 15 July 2025
„the right people can make magic happen“

:-)

seydor 15 July 2025
seems like the whole thing was meant to be a jab at Meta
brcmthrowaway 15 July 2025
Lucky to be able to write this... likely just vested with FU money!
carimura 16 July 2025
No way the newborn slept until 5:30 every morning.
ujkiolp 16 July 2025
this sounds awfully doctored
rasmul 16 July 2025
Some thoughts:

- there is no such thing as OpenAI as a decision unit; there are people at the top who decide, plus shareholder pressure

- a narcissist optimizes for himself and has no affective empathy: cruel, extremely selfish, image-oriented, power-hungry, a liar, etc.

- having no structure means having a hidden structure, and hence real power for the few at the top and no accountability (narcissists love this too)

- framing this as meritocracy is positive framing; it is very easy to hide incompetence

- people want to do good, great, perfect work: naive, and as a leader you can exploit this motivation to let them burn out working for YOUR goals... another narcissist, the richest man in the world per share price, is great at doing this to people

All in all, having this kind of mindset is good for startups or for landing the lucky punch, but AGI will be brought by Anthropic, Google and Ilya :) You will not have a series of lucky punches; you have to have a direction.

I think Sam Altman, a terrible narcissist, uses OpenAI to feel great, and he has no strategy but using others for his own benefit, because narcissists don't care; they just care about their image and power... and that is why OpenAI goes down... bundling with Microsoft was a big red flag in the first place...

When I think of OpenAI, it is a bit like Netscape Navigator + Internet Explorer in one :)

Anthropic is like Safari + Brave

Google is like ... yeah :)

Ilya is like Opera/Vivaldi or so

krashidov 15 July 2025
> giant python monolith

this does not sound fun lol

dagorenouf 15 July 2025
Maybe I’m paranoid, but this sounds too good to be true. Almost like something planted to help with recruiting after Meta poached their best guys.
zzzeek 15 July 2025
> On the other hand, you're trying to build a product that hundreds of millions of users leverage for everything from medical advice to therapy.

... then the next paragraph

> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.

not if you're trying to replace therapists with chatbots, sorry

bagxrvxpepzn 15 July 2025
He joins a proven unicorn at its inflection point and then leaves mere days after hitting his vesting cliff. All of this "learning" and "experience" talk is sopping wet with cynicism.
tines 15 July 2025
Interesting how ChatGPT’s style of writing has made people start bolding so much text.
AIorNot 15 July 2025
I'm 50 and have worked at a few cool places and lots of boring ones. To paraphrase Tolstoy, who tends to be right: all happy families are similar, and unhappy families are unhappy in unique ways.

OpenAI currently selects for the brightest, most excited young minds (and a lot of money). Bright, young (as in full of energy), excited people will work well anywhere, especially if given a fair amount of autonomy.

Young people talking about how hard they worked is not a sign of a great corp culture, just a sign that they are in the super-excited stage of their careers.

In the long run, who knows. I tend to view these companies as groups of like-minded people, and groups of people change; the dynamic can change overnight. So if they can sustain that culture, sure, but who knows...

bawana 15 July 2025
This is a politically correct farewell letter. Obviously something we little people who need jobs have to resort to so the next HR manager doesn't think we are a risk to stock valuation. For a deeper understanding, read Empire of AI by Karen Hao. She defrocks Sam Altman to reveal he is just another human. Like Steve Jobs, he is an adept salesman appealing to the naïve altruistic sentiments of humans while maintaining his singular focus on scale. Not so different from the archetype of Rockefeller in his pursuit of monopoly through scale using any means, Sam is no different from Google, which even forgot its own rallying cry, ‘don't be evil’. Other actors in the story seem to have been infected by the same meme virus, leaving OpenAI for their own empires: Musk left after he and Altman conflicted over who would be CEO (birth of xAI). Amodei, his sister and others left to start Anthropic. Sutskever left to start ‘safe something or other’ (smacks of the same misdirection Sam used when OpenAI formed as a nonprofit), giving the idea of a nonprofit a mantle of evil since OpenAI has pivoted to profit.

The bottom line is that scaling requires money and the only way to get that in the private sector is to lure those with money with the temptation they can multiply their wealth.

Things could have been different in a world before financial engineers bankrupted the US (the crises of Enron, Salomon Bros, and the 2008 mortgage debacle all added hundreds of billions to US debt as the govt bought the ‘too big to fail’ Kool-Aid and bailed out Wall Street by indenturing Main Street). Now 1/4 of our budget is simply interest payment on this debt. There is no room for govt spending on a moonshot like AI. This environment in 1960 would have killed Kennedy’s inspirational moonshot of going to the moon while it was still an idea in his head in his post-coital bliss with Marilyn at his side.

Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.

Ironically, it seems China has a better chance now. Its release of DeepSeek and the full set of parameters is giving it a veneer of altruistic benevolence that is slightly more believable than what we see here in the West. China may win simply on thermodynamic grounds. Training and research in DL consume terawatt-hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x less energy efficient), but the ‘competition’ of multiple players in the US multiplies the energy requirements.

Would govt oversight have been a good thing? Imagine if General Motors, Westinghouse, Bell Labs, and Ford had competed in 1940, each with their own Manhattan Project to develop nuclear weapons. Would the proliferation of nuclear weapons have resulted in human extinction by now?

Will AI’s contribution to global warming be just as toxic as global thermonuclear war?

These are the questions that come to mind after Hao’s historical summary.

vouaobrasil 15 July 2025
> The thing that I appreciate most is that the company "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.

I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.

Keeping AI free for everyone is akin to keeping an addictive drug free for everyone so that it can be sold in larger quantities later.

One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement, and I think everyone deep down inside knows that, but is seduced by the nearly-magical quality of it because we are instinctually driven to seek out advantages and new information.
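
To spell out the prisoner's dilemma structure of that adoption dynamic, here's a toy payoff matrix in Python (the utilities are invented; only their ordering matters):

    # Each cell is (your_payoff, other_payoff); higher is better.
    payoffs = {
        ("no_ai", "no_ai"): (3, 3),  # everyone better off
        ("no_ai", "ai"):    (1, 4),  # you fall behind
        ("ai",    "no_ai"): (4, 1),  # you get ahead
        ("ai",    "ai"):    (2, 2),  # arms race: both worse than (3, 3)
    }

    # Whatever the other player does, "ai" pays you more (4 > 3, 2 > 1),
    # so both sides adopt, landing on (2, 2) even though (3, 3) beats it.
    for other in ("no_ai", "ai"):
        best = max(("no_ai", "ai"), key=lambda you: payoffs[(you, other)][0])
        print(f"if the other plays {other}, your best reply is {best}")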

smeeger 15 July 2025
> everyone I met there is actually trying to do the right thing

Making human beings obsolete is not the right thing. Nobody at OpenAI is doing the right thing.

In another part of the post he says safety teams work primarily on making sure the models don't say anything racist, as well as limiting helpful tips on building weapons of terror… and that AGI safety is basically not a focus. I don't think this company should be allowed to exist. They don't have ANY right to threaten the existence and wellbeing of me and my kids!

solarized 15 July 2025
> As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google. Each of these organizations are going to take a different path to get there based upon their DNA (consumer vs business vs rock-solid-infra + data).

Grok be like. okey. :))