Generative AI doesn't have a coherent understanding of the world

(news.mit.edu)

Comments

southernplaces7 15 November 2024
>Generative AI doesn't have a coherent understanding of the world

The very headline is based on a faulty assumption: that AI has any capacity for understanding at all, which it doesn't. That would require self-directed reasoning and self-awareness, both of which it lacks based on any available evidence. (Though there's no shortage of irrational defenders here who leap to the claim that there's no difference between human consciousness and the pattern matching of today's AI, because they happened to have a "conversation" with ChatGPT, etc.)

jmole 14 November 2024
None of us has a coherent view of the world; the map is not the territory.

penjelly 15 November 2024
LLMs in their current form seem to me like intuitive thought, the kind we use in conversation with friends over a beer. That looks like only one part of a future AI brain: there would still need to be another system for maintaining a world model, another for background planning, and so on. Add vision and hearing models to take in new information about the world, and we'd have something resembling human intelligence where all that data meets.

h_tbob 14 November 2024
To be honest… it's amazing it can have any understanding at all, given that its only "senses" are the d_model inputs (e.g. 1024 numbers per token)!

Imagine trying to understand the world if you were simply given books and books in a language you had never read… and you didn’t even know how to read or even talk!

So it’s pretty incredible it’s got this far!
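To make the point above concrete, here is a minimal sketch (not any real model's code; the vocabulary size, token ids, and table are all made up for illustration) of what an LLM's entire "sensory channel" looks like: each token becomes a row of d_model numbers, and those rows are all the model ever receives.

```python
import numpy as np

d_model = 1024          # width of each input vector, as in the comment above
vocab_size = 50_000     # hypothetical vocabulary size

rng = np.random.default_rng(0)
# Toy embedding table: one d_model-wide vector per vocabulary entry.
embedding_table = rng.standard_normal((vocab_size, d_model))

# "The cat sat" as arbitrary token ids -- the model never sees the words,
# only these rows of numbers.
token_ids = [1012, 2057, 3344]
model_input = embedding_table[token_ids]   # shape: (3, d_model)

print(model_input.shape)  # (3, 1024)
```

Everything the model "knows" about the world has to be squeezed through vectors like these, with no sight, sound, or prior language to anchor them.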

upghost 15 November 2024
"MIT Researchers have yet to discover self-aware sentient dataset. In unrelated news, Sam Altman promises with another 7 trillion dollars he can turn lead into gold. 'For real this time,' claims Altman."
mediumsmart 14 November 2024
AI generates a world that is language, mapped by one word pointing to another, like we did?