OpenAI, Google and Anthropic are struggling to build more advanced AI

(bloomberg.com)

Comments

LASR 7 hours ago
Question for the group here: do we honestly feel like we've exhausted the options for delivering value on top of the current generation of LLMs?

I lead a team exploring cutting edge LLM applications and end-user features. It's my intuition from experience that we have a LONG way to go.

GPT-4o / Claude 3.5 are the go-to models for my team. Every combination of technical investment + LLMs yields a new list of potential applications.

For example, combining a human-moderated knowledge graph with an LLM via RAG allows you to build "expert bots" that understand your business context / your codebase / your specific processes and act almost like a coworker on your team.

If you then add some predictive / simulation capability - e.g. simulating the execution of a task or project, like creating a GitHub PR code change, and testing it against the expert bot above for code review - you can have LLMs create reasonable code changes, with automatic review / iteration etc.
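
A minimal sketch of that generate / expert-review / revise loop (my own illustration, not the parent's actual system; call_llm and retrieve_context are placeholders for any chat-completion API and any knowledge-graph or vector-store lookup):

    def call_llm(prompt: str) -> str:
        """Placeholder: swap in any chat-completion API call."""
        raise NotImplementedError

    def retrieve_context(task: str) -> str:
        """Placeholder: pull relevant facts from a curated knowledge graph / vector store."""
        return "Coding conventions, architecture notes, related modules..."

    def propose_and_review(task: str, max_rounds: int = 3) -> str:
        context = retrieve_context(task)
        draft = call_llm(f"Context:\n{context}\n\nWrite a code change for: {task}")
        for _ in range(max_rounds):
            # "Expert bot" pass: review the draft against the business / codebase context.
            review = call_llm(
                f"Context:\n{context}\n\nAct as an expert reviewer of this change:\n{draft}\n"
                "List concrete problems, or reply APPROVED."
            )
            if "APPROVED" in review:
                break
            # Revise and iterate, like a PR review cycle.
            draft = call_llm(
                f"Context:\n{context}\n\nRevise the change to address the review.\n"
                f"Change:\n{draft}\n\nReview:\n{review}"
            )
        return draft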

Similarly, there are many more capabilities that you can layer on and expose to LLMs to get increasingly productive outputs from them.

Chasing after model improvements and "GPT-5 will be PhD-level" is moot imo. When have you ever hired a PhD coworker who was productive on day 0? You need to onboard them with human expertise, and then give them execution space / long-term memories etc to be productive.

Model vendors might struggle to build something more intelligent. But my point is that we already have so much intelligence and we don't know what to do with that. There is a LOT you can do with high-schooler level intelligence at super-human scale.

Take a naive example. 200k-token context windows are now available. Most people, through ChatGPT, type out maybe 1,500 tokens. That's a huge amount of untapped capacity. No human is going to type out 200k tokens of context. That's why we need RAG, and additional forms of input (e.g. simulation outcomes), to fully leverage it.

iandanforth 7 hours ago
A few important things to remember here:

The best engineering minds have been focused on scaling transformer pre- and post-training for the last three years because they had good reason to believe it would work, and it has up until now.

Progress has been measured against benchmarks which are / were largely solvable with scale.

There is another emerging paradigm which is still small(er) scale but showing remarkable results. That's full multi-modal training with embodied agents (aka robots). 1x, Figure, Physical Intelligence, Tesla are all making rapid progress on functionality which is definitely beyond frontier LLMs because it is distinctly different.

OpenAI/Google/Anthropic are not ignorant of this trend and are also reviving or investing in robots or robot-like research.

So while Orion and Claude 3.5 Opus may not be another shocking giant leap forward, that does not mean there aren't shocking giant leaps coming from slightly different directions.

nutanc 21 minutes ago
Let's keep aside the hype. Let's define more advanced AI. With current architectures, this basically means better copying machines (I don't mean this in a bad way and don't want a debate on this; it's just my opinion based on my usage). Basically everything on the Internet has been crammed into the weights, and the companies are finding it hard to do two things:

1. Find more data.

2. Make the weights capture the data and reproduce it.

In that sense we have reached a limit. So in my opinion we can do a couple of things.

1. App developers can understand the limits and build within the limits.

2. Researchers can take insights from these large models and build better AI systems with new architectures. It's ok to say transformers have reached a limit.

fsndz 21 minutes ago
Sam Altman might be wrong then?

Learning from data is not enough; there is a need for the kind of system-two thinking we humans develop as we grow. It is difficult to see how deep learning and backpropagation alone will help us model that. For tasks where providing enough data is sufficient to cover 95% of cases, deep learning will continue to be useful in the form of 'data-driven knowledge automation.' For other cases, the road will be much more challenging. https://www.lycee.ai/blog/why-sam-altman-is-wrong

ziofill 8 hours ago
I think it is a good thing for AI that we hit the data ceiling, because the pressure moves toward coming up with better model architectures. And compared to a decade ago, there's a much larger number of capable and smart AI researchers looking for one.
aresant 7 hours ago
Taking a holistic view, informed by a disruptive OpenAI / AI / LLM Twitter habit, I would say this is AI's "What gets measured gets managed" moment, and the narrative will change.

This is supported by both general observations and, recently, this tweet from an OpenAI engineer that Sam responded to and engaged with (1) ->

"scaling has hit a wall and that wall is 100% eval saturation"

Which I interpret to mean his view is that models are no longer yielding significant performance improvements because they have maxed out existing evaluation metrics.

Are those evaluations (or even LLMs) the RIGHT measures to achieve AGI? Probably not.

But have they been useful tools to demonstrate that the confluence of compute, engineering, and tactical models is leading towards significant breakthroughs in artificial (computer) intelligence?

I would say yes.

Which in turn are driving the funding, power innovation, public policy etc needed to take that next step?

I hope so.

(1) https://x.com/willdepue/status/1856766850027458648

Animats 7 hours ago
"While the model was initially expected to significantly surpass previous versions of the technology behind ChatGPT, it fell short in key areas, particularly in answering coding questions outside its training data."

Right. If you generate some code with ChatGPT, and then try to find similar code on the web, you usually will. Search for unusual phrases in comments and for variable names. Often, something from Stack Overflow will match.

LLMs do search and copy/paste with idiom translation and some transliteration. That's good enough for a lot of common problems. Especially in the HTML/Javascript space, where people solve the same problems over and over. Or problems covered in textbooks and classes.

But it does not look like artificial general intelligence emerges from LLMs alone.

There's also the elephant in the room - the hallucination/lack of confidence metric problem. The curse of LLMs is that they return answers which are confident but wrong. "I don't know" is rarely seen. Until that's fixed, you can't trust LLMs to actually do much on their own. LLMs with a confidence metric would be much more useful than what we have now.

headcanon 7 hours ago
I don't see a problem with this, we were inevitably going to reach some kind of plateau with existing pre-LLM-era data.

Meanwhile, the existing tech is such a step change that industry is going to need time to figure out how to effectively use these models. In a lot of ways it feels like the "digitization" era all over again - workflows and organizations that were built around the idea humans handled all the cognitive load (basically all companies older than a year or two) will need time to adjust to a hybrid AI + human model.

jmward01 6 hours ago
Every negative headline I see about AI hitting a wall or being over-hyped makes me think of the early 2000s and that new thing, the 'internet' (yes, I know the internet is a lot older than that). There is little doubt in my mind that ten years from now nearly every aspect of life will be deeply connected to AI, just like the internet took over everything in the late 90s and early 2000s and is now deeply connected to everything. I'd even hazard to say that AI could be more impactful.
kklisura 7 hours ago
Not sure if related or not, Sam Altman, ~12hrs ago: there is no wall [1]

[1] https://x.com/sama/status/1856941766915641580

thousand_nights 8 hours ago
not long ago these people would have you believe that a next word predictor trained on reddit posts would somehow lead to artificial general superintelligence
WorkerBee28474 8 hours ago
> OpenAI's latest model ... failed to meet the company's performance expectations ... particularly in answering coding questions outside its training data.

So the models' accuracies won't grow exponentially, but can still grow linearly with the size of the training data.

Sounds like DataAnnotation will be sending out a lot more LinkedIn messages.

summerlight 52 minutes ago
I guess this is somewhat expected? The current frontier models probably already have exhausted most of the entropy in the training data accumulated over decades and the new training data is very sparse. And the current mainstream architectures are not capable of sophisticated searching and planning, essential aspects for generating new entropy out of thin air. o1 was an interesting attempt to tackle this problem, but we probably still have a long way to go.
pluc 7 hours ago
They've simply run out of data to use to fabricate legitimate-looking guesses. They can't create anything that doesn't already exist.
Dr_Birdbrain 1 hour ago
I don’t know how to square this with the recent statement by Dario Amodei (Anthropic CEO) on the Lex Fridman podcast saying that in his opinion the scaling hypothesis still has plenty of room to run.
irrational 8 hours ago
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was AGI meant the AI is self aware like a human. An LLM hardly seems like something that will lead to self-awareness.

Havoc 46 minutes ago
The new Gemini just hit some good benchmarks.

This smells like it's mostly based on OAI having a bit of bad luck with their next model rather than a fundamental slowdown / barrier.

They literally just made a decent sized leap with o1

sssilver 3 hours ago
One thing that makes the established AIs less ideal for my (programming) use-case is that the technologies I use quickly evolve past whatever the published models "learn".

On the other hand, a lot of these frameworks and languages have relatively decent and detailed documentation.

Perhaps this is a naive question, but why can't I as a user just purchase "AI software" that comes with a large pre-trained model to which I can say, on my own machine, "go read this documentation and help me write this app in this next version of Leptos", and it would augment its existing model with this new "knowledge".
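
One common workaround today, short of actually updating the model's weights, is to index the documentation and retrieve relevant chunks into the prompt (RAG). A minimal sketch, with embed as a placeholder for any local or hosted embedding model (nothing here is specific to Leptos or to any particular vendor):

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder: swap in a real embedding model, local or hosted."""
        raise NotImplementedError

    def build_index(doc_chunks: list[str]) -> np.ndarray:
        # Embed every chunk of the framework docs once, up front.
        return np.stack([embed(c) for c in doc_chunks])

    def retrieve(query: str, doc_chunks: list[str], index: np.ndarray, k: int = 5) -> list[str]:
        # Cosine similarity between the question and every doc chunk; return the top k.
        q = embed(query)
        scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
        return [doc_chunks[i] for i in np.argsort(-scores)[:k]]

    # The retrieved chunks get prepended to the coding question before it is sent to
    # the model, so answers reflect the current version of the docs rather than
    # whatever was in the training cut-off.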

danjl 7 hours ago
Where will the training data for coding come from now that Stack Overflow has effectively been replaced? Will the LLMs share fixes for future problems? As the world moves forward, and the amount of non-LLM generated data decreases, will LLMs actually revert their advancements and become effectively like addled brains, longing for the "good old times"?
GiorgioG 46 minutes ago
It’s about time the hype starts to die down. LLMs are brilliant for small bits of grunt work in software. It is not however doing any actual reasoning.
glial 2 hours ago
I think self-consistency is a critical feature of LLMs or any AI that's currently missing. It's one of the core attributes of truth [1], in addition to the order and relationship of statements corresponding to the order and relationship of things in the world. I wonder if some kind of hierarchical language diffusion model would be a way to implement this -- where text is not produced sequentially, but instead hierarchically, with self-consistency checks at each level.

[1] https://en.wikipedia.org/wiki/Coherence_theory_of_truth
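
One way to read that proposal, as a rough sketch under my own assumptions (it drops the "diffusion" part and only shows hierarchical generation with a per-level self-consistency check; call_llm is a placeholder for any completion API):

    def call_llm(prompt: str) -> str:
        """Placeholder: any chat/completion API."""
        raise NotImplementedError

    def generate_hierarchically(topic: str, max_retries: int = 2) -> list[str]:
        # Level 1: produce the outline (the "hierarchical" part).
        outline = call_llm(f"Write a 5-point outline about: {topic}")
        sections = []
        for point in outline.splitlines():
            # Level 2: expand each point, retrying if the consistency check fails.
            for _ in range(max_retries + 1):
                section = call_llm(f"Expand this outline point into a paragraph: {point}")
                verdict = call_llm(
                    "Does the new paragraph contradict the outline or the earlier paragraphs?\n"
                    f"Outline: {outline}\nEarlier: {sections}\nNew: {section}\n"
                    "Answer CONSISTENT or CONTRADICTION."
                )
                if "CONSISTENT" in verdict:
                    sections.append(section)
                    break
        return sections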

benopal64 8 hours ago
I am not sure how these large companies think they will reach "greater-than-human" intelligence any time soon if they do not create systems that financially incentivize people to sell their knowledge labor (unstable contracting gigs are not attractive).

Where do these large "AI" companies think the mass amounts of data used to train these models come from? People! The most powerful and compact complex systems in existence, IMO.

svara 7 hours ago
The recent big successes in deep learning have all been, to a large part, successes in leveraging relatively cheaply available training data.

AlphaGo - self-play

AlphaFold - PDB, the protein database

ChatGPT - human knowledge encoded as text

These models are all machines for clever interpolation in gigantic training datasets.

They appear to be intelligent, because the training data they've seen is so vastly larger than what we've seen individually, and we have poor intuition for this.

I'm not throwing shade, I'm a daily user of ChatGPT and find tremendous and diverse value in it.

I'm just saying, this particular path in AI is going to make step-wise improvements whenever new large sources of training data become available.

I suspect the path to general intelligence is not that, but we'll see.

fallat 6 hours ago
What a stupid piece. We are making leaps every 6 months still. Tell me this when there are no developments for 3 years.
devit 4 hours ago
It seems obvious to me that Common Crawl plus GitHub public repositories have more than enough data to train an AI that is as good as any programmer (at tasks not requiring knowledge of non-public codebases or non-public domain knowledge).

So the problem is more in the algorithm.

Bjorkbat 1 hour ago
It's kind of, I don't know, "weird", observing how there's all these news outlets reporting on how essentially every up-and-coming model has not performed as expected, while all the employees at these labs haven't changed their tune in the slightest.

And there's a number of reasons why, the most likely being that they've found other ways to get improvements out of AI models, so diminishing returns on training aren't that much of a problem. Or maybe the leakers are lying, but I highly doubt that considering the past record of news outlets reporting on accurate leaked information.

Still though, it's interesting how basically every frontier lab created a model that didn't live up to expectations, and every employee at these labs on Twitter has continued to vague-post and hype as if nothing ever happened.

It's honestly hard to tell whether or not they really know something we don't, or if they have an irrational exuberance for AGI bordering on cult-like, and they will never be able to mentally process, let alone admit, that something might be wrong.

xyst 6 hours ago
Many late investors in the genAI space about to be bag holders
tippytippytango 2 hours ago
There’s only so much you can do when you train on the data instead of the processes that created that data.
the_king 7 hours ago
Anthropic's latest 3.5 Sonnet is a cut above GPT-4 and 4o. And if someone had given it to me and said, here's GPT-4.5, I would have been very happy with it.
czhu12 4 hours ago
If it becomes obvious that LLMs have a more narrow set of use cases, rather than the all-encompassing story we hear today, then I would bet that the LLM platforms (OpenAI, Anthropic, Google, etc.) will start developing products to compete directly with the applications that are supposed to be built on top of them, like Cursor, in an attempt to increase their revenue.

I wonder what this would mean for companies raising today on the premise of building on top of these platforms. Maybe the best ones get their ideas copied, reimplemented, and sold for cheaper?

We already kind of see this today with OpenAI's canvas and Claude artifacts. Perhaps they'll even start moving into Palantir's space and start having direct customer implementation teams.

It is becoming increasingly obvious that LLMs are quickly becoming commoditized. Everyone is starting to approach the same limits in intelligence, and is finding it hard to carve out margin from competitors.

Most recently this was exhibited by the backlash at Claude raising prices because their product is better. In any normal market this would be totally expected, but people seemed shocked that anyone would charge more than the raw cost of running the LLM itself.

https://x.com/ArtificialAnlys/status/1853598554570555614

shmatt 8 hours ago
Time to start selling my "probabilistic syllable generators are not intelligence" t shirts
LarsDu88 6 hours ago
Curves that look exponential in virtually all cases turn out to be logarithmic.

Certain OpenAI insiders must have known this for a while, hence Ilya Sutskever's new company in Israel

gchamonlive 1 hour ago
We should put a model in an actual body and let it loose in the world to build from experiences. Inference is costly though, so the robot would interact during one period and update its model during another, flushing the context window (short-term memory) into its training set (long-term memory).
Veuxdo 7 hours ago
> They are also experimenting with synthetic data, but this approach has its limitations.

I was really looking forward to using "synthetic data" euphemistically during debates.

zusammen 7 hours ago
I wonder how much this has to do with a fluency plateau.

Up to a certain point, a conditional fluency stores knowledge, in the sense that semantically correct sentences are more likely to be fluent… but we may have tapped out in that regard. LLMs have solved language very well, but to get beyond that has seemed, thus far, to require RLHF, with all the attendant negatives.

superjose 4 hours ago
I'm more in the camp that these techs don't need to be perfect, but they need to be practical enough.

And I think the latter is good enough for us to do exciting things.

nomendos 6 hours ago
"Eureka"!?

At the very early phase of the boom I was among the very few who knew and predicted this (usually the most free- and deep-thinking/knowledgeable). Then my prediction got reinforced by the results. One of the best examples was an experiment of mine in which all of today's AIs failed to solve tree serialization and de-serialization for each of DFS (pre-order/in-order/post-order) and BFS (level-order), which is 8 algorithms (2x4), and only 3 results were correct! The reason is "limited training inputs", since the internet and open source do not have the other solutions :-)

So, I spent "some" time and implemented all 8, which took me a few days. By the way, this demonstrates that ~15-30 min pointless leetcode-like interviews require you to regurgitate/memorize rather than think. So, as a hard logical consequence, there will have to be a "crash/cleanup" in the area of leetcode-like interviews, as they will suddenly be proclaimed "pointless/stupid". However, I decided not to publish the remaining 5 solutions :-)
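
For context, here is a minimal sketch (my own, not the withheld solutions) of the one variant that is all over the internet - pre-order DFS serialization with null markers - which is roughly what the models reliably reproduce; the rarer orderings are where they fail:

    from collections import deque

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def serialize(root):
        # Pre-order walk; "#" marks a missing child so the tree shape is recoverable.
        out = []
        def dfs(node):
            if node is None:
                out.append("#")
                return
            out.append(str(node.val))
            dfs(node.left)
            dfs(node.right)
        dfs(root)
        return ",".join(out)

    def deserialize(data):
        # Consume tokens in the same pre-order order to rebuild the tree.
        tokens = deque(data.split(","))
        def build():
            tok = tokens.popleft()
            if tok == "#":
                return None
            node = Node(int(tok))
            node.left = build()
            node.right = build()
            return node
        return build()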

This (and other experiments) confirms hard limits of the LLM approach (even when used with chain-of-thought). Increasing the compute spent on the problem produces smaller and smaller gains (inverse exponential/logarithmic/diminishing returns) = a new AGI approach/design is needed, and to my knowledge the majority of the inve$tment (~99%) is in LLMs, so "buckle up" at some point / soon?

Impacts and realities: LLMs shall "run their course" (produce some products/results/$$$, get reviewed/$corrected), and whoever survives that pruning shall earn money on those products while investing in new research to find a new AGI design/approach (which could take quite a long time,... or not). NVDA is at the center of thi$, and time-wise this peak/turn/crash/correction is hard to predict (although I see it on the horizon and min/max time can be estimated). Be aware and alert. I'll stop here and hold my other thoughts/opinions/ideas for a much deeper discussion. (BTW I am still "full in on NVDA" until,....)

guluarte 7 hours ago
Well, there have been no significant improvements to the GPT architecture over the past few years. I'm not sure why companies believe that simply adding more data will resolve the issues
nerdypirate 8 hours ago
"We will have better and better models," wrote OpenAI CEO Sam Altman in a recent Reddit AMA. "But I think the thing that will feel like the next giant breakthrough will be agents."

Is this certain? Are Agents the right direction to AGI?

yalogin 5 hours ago
I do wonder how quickly LLMs will become a commodity AI instrument just like any other AI out there. If so, what happens to OpenAI?
wg0 13 November 2024
AI winter is here. Almost.
k__ 3 hours ago
But AGI is always right around the corner?

I don't get it...

rubiquity 5 hours ago
> Amodei has said companies will spend $100 million to train a bleeding-edge model this year

Is it just me or does $100 million sound like it's on the very, very low end of how much training a new model costs? Maybe you can arrive within $200 million of that mark with amortization of hardware? It just doesn't make sense to me that a new model would "only" be $100 million when AmaGooBookSoft are spending tens of billions on hardware and the AI startups are raising billions every year or two.

mrandish 1 hour ago
Based on recent rumblings about AI scaling hitting a wall, of which this article is perhaps the most visible (and in a high-reach financial publication), I'm considering increasing my estimated probability that we might see a major market correction next year (and possibly even a bubble collapse). (Example: "CONFIRMED: LLMs have indeed reached a point of diminishing returns" https://garymarcus.substack.com/p/confirmed-llms-have-indeed...)

To be clear, I don't think a near-term bubble collapse is likely but I'm going from 3% to maybe ~10%. Also, this doesn't mean I doubt there's real long-term value to be delivered or money to be made in AI solutions. I'm thinking specifically about those who've been speculatively funding the massive build out of data centers, energy and GPU supply expecting near-term demand to continue scaling at the recent unprecedented rates. My understanding is much of this is being funded in advance of actual end-user demand at these elevated levels and it is being funded either by VC money or debt by parties who could struggle to come up with the cash to pay for what they've ordered if either user demand or their equity value doesn't continue scaling as expected.

Admittedly this scenario assumes that these investment commitments are sufficiently speculative and over-committed to create bubble dynamics and tipping points. The hypothesis goes like this: the money sources who've over-committed to lock up scarce future supply in the expectation it will earn outsize returns have already started seeing these warning signs of efficiency and/or progress rates slowing which are now hitting mainstream media. Thus it's possible there is already a quiet collapse beginning wherein the largest AI data center GPU purchasers might start trying to postpone future delivery schedules and may soon start trying to downsize or even cancel existing commitments or try to offload some of their future capacity via sub-leasing it out before it even arrives, etc. Being a dynamic market, this could trigger a rapidly snowballing avalanche of falling prices for next-year AI compute (which is already bought and sold as a commodity like pork belly futures).

Notably, there are now rumors claiming some of the largest players don't currently have the cash to pay for what they've already committed to for future delivery. They were making calculated bets they'd be able to raise or borrow that capital before payments were due. Except if expectation begins to turn downward, fresh investors will be scarce and banks will reprice a GPU's value as loan collateral down to pennies on the dollar (shades of the 2009 financial crisis where the collateral value of residential real estate assets was marked down). As in most bubbles, cheap credit is the fuel driving growth and that credit can get more expensive very quickly - which can in turn trigger exponential contagion effects causing the bubble to pop. A very different kind of "Foom" than many AI financial speculators were betting on! :-)

So... in theory, under this scenario sometime next year NVidia/TSMC and other top-of-supply-chain companies could find themselves with excess inventories of advanced node wafers because a significant portion of their orders were from parties who no longer have access to the cheap capital to pay for them. And trying to sue so many customers for breach can take a long time and, in a large enough sector collapse, be only marginally successful in recouping much actual cash.

I'd be interested in hearing counter-arguments (or support) for the impossibility (or likelihood) of such a scenario.

aurareturn 13 November 2024
Is there a timeline of AI winters, and does each winter get shorter and shorter as time goes on?
non- 7 hours ago
Honestly could use a breather from the recent rate of progress. We are just barely figuring out how to interact with the models we have now. I'd bet there are at least 100 billion-dollar startups that will be built even if these labs stopped releasing new models tomorrow.
quantum_state 3 hours ago
Hope this will be a constant reminder that brute force can only get one so far, though it may still be useful when it is. With lots of intuition gained, it's time to ponder things a bit more deeply.
atomsatomsatoms 8 hours ago
At least they can generate haikus now
cryptica 3 hours ago
It's interesting the way things turned out so far with LLMs, especially from the perspective of a software engineer. We are trained to keep a certain skepticism when we see software which appears to be working because, ultimately, the only question we care about is "Does it meet user requirements?" and this is usually framed in terms of users achieving certain goals.

So it's interesting that when AI came along, we threw caution to the wind and started treating it like a silver bullet... Without asking the question of whether it was applicable to this goal or that goal...

I don't think anyone could have anticipated that we could have an AI which could produce perfect sentences, faster than a human, better than a human but which could not reason. It appears to reason very well, better than most people, yet it doesn't actually reason. You only notice this once you ask it to accomplish a task. After a while, you can feel how it lacks willpower. It puts into perspective the importance of willpower when it comes to getting things done.

In any case, LLMs bring us closer to understanding some big philosophical questions surrounding intelligence and consciousness.

wslh 7 hours ago
It sounds a bit sci-fi, but since these models are built on data generated by our civilization, I wonder if there's an epistemological bottleneck requiring smarter or more diverse individuals to produce richer data. This, in turn, could spark further breakthroughs in model development. Although these interactions with LLMs help address specific problems, truly complex issues remain beyond their current scope.

With my user hat on, I'm quite pleased with the current state of LLMs. Initially, I approached them skeptically, using a hackish mindset and posing all kinds of Turing test-like questions. Over time, though, I shifted my focus to how they can enhance my team's productivity and support my own tasks in meaningful ways.

Finally, I see LLMs as a valuable way to explore parts of the world, accommodating the reality that we simply don’t have enough time to read every book or delve into every topic that interests us.

user90131313 6 hours ago
AI market top very soon
wildermuthn 5 hours ago
Simply put, AGI requires more data: qualia.
Oras 7 hours ago
I think Meta will have the upper hand soon with the release of their glasses. If they manage to make them a daily-use device, and pay users to record and share their lives, then they will have data no one else has: a mix of vision, audio, and physics.
polskibus 6 hours ago
In other news, Altman said AGI is coming next year https://www.tomsguide.com/ai/chatgpt/sam-altman-claims-agi-i...
Timber-6539 4 hours ago
Direct quote from the article: "The companies are facing several challenges. It’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems."

The irony here is astounding.

m3kw9 4 hours ago
Hold your horses, OpenAI just came out with o1-preview 2 months ago, showing what test-time compute can do.
lobochrome 2 hours ago
Isn’t this just the expected delay from the respin of Blackwell?
cubefox 12 hours ago
It's very strange this got so few upvotes. The scoop by The Information a few days ago, which came to similar conclusions, was also ignored on HN. This is arguably rather big news.
12_throw_away 6 hours ago
Well shoot. It's not like it was patently obvious that this would happen before the industry started guzzling electricity and setting money on fire, right? [1]

[1] https://dl.acm.org/doi/10.1145/3442188.3445922

kaibee 6 hours ago
Not sure where the OP to the comment I meant to reply to is, but I'll just add this here.

> I suspect the path to general intelligence is not that, but we'll see.

I think there's three things that a 'true' general intelligence has which is missing from basic-type-LLMs as we have now.

1. knowing what you know. <basic-LLMs are here>

2. knowing what you don't know but can figure out via tools/exploration. <this is tool use/function calling>

3. knowing what can't be known. <this is knowing that halting problem exists and being able to recognize it in novel situations>

(1) From an LLM's perspective, once trained on a corpus of text, it knows 'everything'. It knows about the concept of not knowing something (from having seen text about it), insofar as an LLM knows anything, but it doesn't actually have a growable map of knowledge that it knows has uncharted edges.

This is where (2) comes in, and this is what tool use/function calling tries to solve atm, but the way function calling works atm doesn't give the LLM knowledge the right way. I know that I don't know what 3,943,034 / 234,893 is. But I know I have a 'function call' of knowing the algorithm for doing long division on paper. (There's a toy sketch of what I mean by a tool call at the end of this comment.) And I think there's another subtle point here: my knowledge in (1) includes the training data generated from running the intermediate steps of the long-division algorithm. This is the knowledge that later generalizes to being able to use a calculator (and this is also why we don't just give kids calculators in elementary school). But this is also why a kid who knows how to do long division on paper doesn't separately need to learn when/how to use a calculator, besides the very basics. Using a calculator to do that math feels like 1 step, but it actually still has all the initial mechanical steps of setting up the problem on paper. You have to type in each digit individually, etc.

(3) I'm less sure of this point now that I've written out points (1) and (2), but that's kind of exactly the thing I'm trying to get at. It's being able to recognize when you need more practice of (1) or more 'energy/capital' for doing (2).

Consider a burger restaurant. If you properly populated the context of a ChatGPT-scale model with the data for a burger restaurant from 1950, and gave it the kind of 'function calling' we're plugging into LLMs now, it could manage it. It could keep track of inventory, it could keep tabs on the employee subprocesses, knowing when to hire, fire, or get new suppliers, all via function calling. But it would never try to become McDonald's, because it would have no model of the internals of those function calls, and it would have no ability to investigate or modify the behaviour of those function calls.
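
The toy tool-call sketch mentioned above (my own illustration; decide_tool stands in for the model's structured function-calling output, and the tools themselves are ordinary code):

    # The model only "knows" it doesn't know the answer if it can map the question
    # onto a tool it does know how to drive.
    TOOLS = {
        "long_division": lambda a, b: divmod(a, b),  # quotient and remainder, like on paper
    }

    def decide_tool(question: str) -> dict:
        """Stand-in for the model's function-calling step: it emits which tool to
        call and with which arguments, instead of guessing the arithmetic itself."""
        return {"tool": "long_division", "args": [3943034, 234893]}

    def answer(question: str) -> str:
        call = decide_tool(question)
        q, r = TOOLS[call["tool"]](*call["args"])
        return f"{call['args'][0]} / {call['args'][1]} = {q} remainder {r}"

    print(answer("What is 3,943,034 divided by 234,893?"))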

jppope 4 hours ago
Just an observation. If the models are hitting the top of the S-curve, that might be why Sam Altman raised all the money for OpenAI... it might not be available if Venture Capitalists realize that the gains are close to being done
russellbeattie 5 hours ago
Go back a few decades and you'd see articles like this about CPU manufacturers struggling to improve processor speeds and questioning if Moore's Law was dead. Obviously those concerns were way overblown.

That doesn't mean this article is irrelevant. It's good to know if LLM improvements are going to slow down a bit because the low hanging fruit has seemingly been picked.

But in terms of the overall effect of AI and questioning the validity of the technology as a whole, it's just your basic FUD article that you'd expect from mainstream news.

dangw 3 hours ago
where the fuck is simonw in this thread

xd

bad_haircut72 8 hours ago
I'm no Alan Turing but I have my own definition for AGI - when I come home one day and there's a hole under my sink with a note: "Mum and Dad, I love you but I can't stand this life any more, I'm running away to be a smoke machine in Hollywood - the dishwasher"
aaroninsf 8 hours ago
It's easy to be snarky at ill-informed and hyperbolic takes, but it's also pretty clear that large multi-modal models, trained with the data we already have, are going to eventually give us AGI.

IMO this will require not just much more expansive multi-modal training, but also novel architecture, specifically, recurrent approaches; plus a well-known set of capabilities most systems don't currently have, e.g. the integration of short-term memory (context window if you like) into long-term "memory", either episodic or otherwise.

But these are as we say mere matters of engineering.

yobid20 4 hours ago
This was predicted. AI isn't going to get any better.
Davidzheng 7 hours ago
Just because you guys want something to be true and can't accept the alternative and upvote it when it agrees with your view does not mean it is a correct view.