To answer a few people at once: I did mention compensation as a factor in the post, but I didn't elaborate on the details, so it was easy to miss. Comp is important, of course, but so are the other factors. It feels like I can't go a day without reading about the cost of AI datacenters in the news, and this is something I can actually do something about.
> ...it's not just about saving costs – it's about saving the planet
There's something that doesn't sit right with me about this statement, and I'm not sure what it is. Are you sure you didn't just join for the money? (edit: cool problems, too)
Brendan, I'm a big fan of your book and your work.
I don't have a problem with you joining OpenAI; best of luck there!
However, I'm not sure your analysis is quite correct, in this case.
If OpenAI can mobilize X (giga)dollars to buy Y amount of energy, your work there will not reduce X or Y; it will simply help them produce more "tokens" (or whatever the "unit of AI" is) for a given amount of energy.
So in a sense you're helping make OpenAI's tools better and more effective, but that doesn't reduce resource usage.
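For what it's worth, here's a back-of-the-envelope sketch of that dynamic (the Jevons paradox linked further down in the thread), with entirely made-up numbers, just to show why an efficiency win at a fixed energy budget means more tokens rather than less energy:

```python
# Hypothetical numbers for the argument above: if the energy budget is fixed
# by the dollars spent, an efficiency win means more tokens for the same
# energy, not less energy.

energy_budget_gwh = 1_000          # hypothetical annual energy purchase, GWh
tokens_per_gwh_before = 1e12       # hypothetical efficiency before optimization
tokens_per_gwh_after = 2e12        # suppose optimization work doubles it

tokens_before = energy_budget_gwh * tokens_per_gwh_before
tokens_after = energy_budget_gwh * tokens_per_gwh_after
print(tokens_after / tokens_before)   # 2.0: same energy, twice the tokens

# Jevons paradox: if cheaper tokens induce more than 2x the demand,
# total energy use goes up rather than down.
induced_demand_factor = 3.0           # hypothetical demand growth
energy_needed_gwh = tokens_before * induced_demand_factor / tokens_per_gwh_after
print(energy_needed_gwh)              # 1500.0 GWh, more than the original 1000
```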
> She was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.
This seems rather sad. Is this really what AI is for?
And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.
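To put a rough shape on "a small local model": here's a minimal sketch using the Hugging Face transformers text-generation pipeline with a small instruct model. The model choice, prompt, and token limit are illustrative placeholders, and it assumes a recent transformers release that accepts chat-style message lists:

```python
# Sketch: a small local model handling the "what's that city like" chat use
# case on a laptop CPU. Model and prompt are illustrative, not a recommendation.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters, runs locally on CPU
)

messages = [
    {
        "role": "user",
        "content": (
            "My friend is travelling in Lisbon this week. What is the city like, "
            "and what tourist activities might she be doing?"
        ),
    }
]

out = chat(messages, max_new_tokens=200)
# Recent transformers versions return the whole conversation; the last entry
# is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```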
Brendan can do whatever he wants. He's that good. If anybody seriously needed to interview him 20+ times to figure it out, then the burden is now on them not to fuck it up.
Apparently, there's this guy who's really good at optimizing computer performance and makes a lot of money doing it. At the same time, he writes mediocre school essays that are actually a bit embarrassing. Guys, if you have the opportunity to land a very well-paid job, then do it. Take the money. Live your life. But please spare us the public self-castration.
If it's in your power, make sure user prompts and LLM responses are never read, never analyzed, and never used for training: not anonymized, not derived, not at all.
https://en.wikipedia.org/wiki/Jevons_paradox
How could she not know?
> I'd been missing that human connection
At OpenAI.
You're in for a surprise, buddy.
Just say you joined for the money: Intel's stock didn't do a 10,000x run like Nvidia's, and you completely missed it.
So the best chance at something like that happening again is OpenAI, if they reach a $1T valuation with AGI.