Death by AI (davebarry.substack.com)
573 points by ano-ther, 19 July 2025 | 215 comments

Comments

A popular local spot has a summary on google maps that says:
Vibrant watering hole with drinks & po' boys, as well as a jukebox, pool & electronic darts.
It doesn't serve po' boys, have a jukebox (though the playlists are impeccable), have pool, or have electronic darts. (It also doesn't really have drinks in the way this implies. It's got beer and a few canned options. No cocktails or mixed drinks.)
A month ago they got a catty one-star review for the misleading description, from someone who really wanted to play pool or darts.
I'm sure the owner reported it. I reported it. I imagine other visitors have as well. At least a month on, it's still there.
I'd say this isn't just an AI overview thing. It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it. Various "feedback" forms are mostly ignored.
I had to fight a similar battle with Google Maps, which most people believe to be a source of truth, and it took years until incorrect information was changed. I'm not even sure if it was because of all the feedback I provided.
I see Google as a firehose of information that they spit at me (a "feed"). They are too big to be concerned about inconsistencies, as these don't hurt their business model.
That's obviously broken, but part of this is an inherent difficulty with names. One thing they could do would be to have a default question that is always present, like "what other people named [_____] are there?"
That wouldn't solve the problem of mixing up multiple people. But the first problem most people have is probably actually that it pulls up a person that is more famous than who they were actually looking for.
I think Google does have some type of knowledge graph. I wonder how much the AI model uses it.
Maybe it hits the graph, but also some kind of Google search, and then the LLM is something like Gemini Flash Lite and isn't smart enough to realize which search results go with the famous person from the graph versus just random info from the search results.
I imagine for a lot of names, there are different levels of fame and especially in different categories.
It makes me realize that my knowledge graph application may eventually have an issue with using first and last name as entity IDs. Although it is supposed to be for just an individual's personal info so I can probably mostly get away with it. But I already see a different issue when analyzing emails where my different screen names are not easily recognized as being the same person.
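If I do switch, it would probably look something like this rough sketch (Python, all names made up): give each person an opaque entity ID and treat legal names, screen names, and email handles as aliases that resolve to it, rather than keying on the name itself.

    import uuid

    class PersonRegistry:
        """Toy entity store: opaque IDs, with names and screen names as aliases."""

        def __init__(self):
            self.people = {}    # entity_id -> canonical display name
            self.aliases = {}   # normalized alias -> entity_id

        def add_person(self, display_name, aliases=()):
            entity_id = uuid.uuid4().hex          # stable, opaque key
            self.people[entity_id] = display_name
            for alias in (display_name, *aliases):
                self.aliases[alias.lower()] = entity_id
            return entity_id

        def resolve(self, name_or_handle):
            # Every alias (legal name, screen name, email address) points at
            # the same entity, so two handles can't drift into two "people".
            return self.aliases.get(name_or_handle.lower())

    reg = PersonRegistry()
    me = reg.add_person("Dave Barry", aliases=["dbarry42", "dave.b@example.com"])
    assert reg.resolve("dbarry42") == me

The alias table is also where the screen-name problem would land: every handle I've ever used maps back to the same ID.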
That is such a classic problem with Google (from long before AI).
I am not optimistic about anything being changed from this, but hope springs eternal.
Also, I think the trilobite is cute. I have a [real fossilized] one on my desk. My friend stuck a pair of glasses on it, because I'm an old dinosaur, but he wanted to go back even further.
I just saw recently that a band called Dutch Interior had Meta AI hallucinate straight-up slander about how their band is linked to white supremacists and far-right extremists.
Wonderfully absurdist. Reminds me of "I am the SF writer Greg Egan. There are no photos of me on the web.", a placeholder image mindlessly regurgitated all over the internet.
Grew up reading Dave's columns, and managed to get ahold of a copy of Big Trouble when I was in the 5th grade. I was probably too young to be reading about chickens being rubbed against women's bare chests and "sex pootie" (whatever that is), but the way we were being propagandized during the early Bush years, his was an extremely welcome voice of absurdity-tinged wisdom, alongside Aaron McGruder's and Gene Weingarten's. Very happy to see his name pop up and that he hasn't missed a beat. And that he's not dead. /Denzel
I also hope that the AI and Google duders understand that this is most people's experience with their products these days. They don't work, and they twist reality in ways that older methods didn't (couldn't, because of the procedural guardrails and direct human input and such). And no amount of spin is going to change this perception - of the stochastic parrots being fundamentally flawed - until they're... you know... not. The sentiment management campaigns aren't that strong just yet.
A few versions of that overview were not incorrect: there actually was another Dave Barry who did die at the time mentioned. Why does this Dave Barry believe he has more of a right to be the one pointed to for the query "What happened to him," when nothing has happened to him but something most certainly did happen to the other Dave Barry (death)?
This reminds me a lot of the special policies Wikipedia has developed through experience about sensitive topics, like biographies of living persons, deaths, etc.
I had a similar experience with Meta’s AI. Through their WhatsApp interface I tried for about an hour to get a picture generated. It kept restating everything I asked for correctly, but it never arrived at the picture; it actually stayed far from what I asked for, at best getting 70% of the way there. This and many other interactions with many LLMs made me realize one thing: once the LLM starts hallucinating, it’s really tough to steer it away from it. There is no fixing it.
I don’t know if this is a fundamental problem with the LLM architecture or just a matter of proper prompting.
I love his writing, and this wonderful story illustrates how tired I am of anything AI. I wish there was a way to just block it all, similar to how PiHole blocks ads. I miss the pre-AI (and pre-"social"-network, and pre-advertising-company-owned) internet so much.
Maybe it's a genuine problem with AI that it can only hold one idea, one possible version of reality, at any given time. Though I guess many humans have the same issue. I first heard of this idea from Peter Thiel when he described what he looks for in a founder. It seems increasingly relevant to our social structure that the people and systems who make important decisions are able to hold multiple conflicting ideas without ever fully accepting one or the other. Conflicting ideas create decision paralysis of varying degrees, which is useful at times. It seems like an important feature to implement in AI.
It's interesting that LLMs produce each output token as a probability distribution, but it appears that in order to generate the next token (which is itself expressed as probabilities), the model has to pick one specific word as the previous token. It can't just build more probabilities on top of previous probabilities; it has to collapse each token's probabilities as it goes?
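Roughly what I mean, as a toy sketch (plain Python with made-up probabilities, not any real model): at each step you get a full distribution over the vocabulary, but you have to commit to one concrete token before the next distribution can even be computed.

    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def next_token_distribution(context):
        # Stand-in for the model: returns made-up probabilities over VOCAB
        # that depend only on the tokens chosen so far.
        random.seed(" ".join(context))
        weights = [random.random() for _ in VOCAB]
        total = sum(weights)
        return [w / total for w in weights]

    context = ["the"]
    for _ in range(5):
        probs = next_token_distribution(context)         # full distribution...
        token = random.choices(VOCAB, weights=probs)[0]  # ...collapsed to one pick
        context.append(token)                            # only the pick is fed back

    print(" ".join(context))

(Beam search and similar tricks keep a handful of candidate continuations around, but each candidate is still a concrete sequence of already-picked tokens, not a distribution over sequences.)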
Why we are still calling all this hype "AI" is a mystery to me. There is zero intelligence in it. Zero. It should be called "AK": Artificial Knowledge. And I'm being extremely kind.
“So for now we probably should use it only for tasks where facts are not important, such as writing letters of recommendation and formulating government policy.”
I immediately started thinking about Brazil when I read this, and a future of sprawling bureaucratic AI systems you have to somehow navigate and correct.
I just tried the same thing with my name. It got me confused with someone else who is a Tourette syndrome advocate. There was one mention that was correct, but it got my gender wrong. Haha
"for now we probably should use it only for tasks where facts are not important, such as writing letters of recommendation and formulating government policy."
I tend to think of LLMs more like 'thinking' than 'knowing'.
I mean, when you give an LLM good input, it seems to have a good chance of creating a good result. However, when you ask an LLM to retrieve facts, it often fails. And when you look at the inner workings of an LLM, that should not surprise us. After all, they are designed to apply logical relationships between input nodes. However, this is more akin to applying broad concepts than to recalling detailed facts.
So if you want LLMs to succeed with their task, provide them with the knowledge they need for their task (or at least the tools to obtain the knowledge themselves).
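Something like this rough sketch is what I have in mind (Python; call_llm is a hypothetical stand-in, not any particular vendor's API, and the facts are just the ones from this thread): look the facts up yourself and put them in the prompt, instead of hoping the model recalls them.

    FACTS = {
        "dave_barry_humorist": "Dave Barry is a Pulitzer-winning humor columnist; he is alive.",
        "dave_barry_activist": "A different Dave Barry, a Dorchester political activist, died in 2016.",
    }

    def build_prompt(question, fact_keys):
        # Hand the model the relevant facts instead of relying on its recall.
        facts = "\n".join("- " + FACTS[k] for k in fact_keys)
        return (
            "Answer using only the facts below. If they don't cover it, say so.\n"
            "Facts:\n" + facts + "\n\nQuestion: " + question
        )

    prompt = build_prompt(
        "What happened to Dave Barry, the humor columnist?",
        ["dave_barry_humorist", "dave_barry_activist"],
    )
    # answer = call_llm(prompt)  # call_llm: whatever model client you actually use
    print(prompt)

That way the model only has to reason over what is in front of it, which is the part it is actually decent at.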
Perhaps I'm missing the joke, but I feel sorry for the nice Dave Barry, not this arrogant one who genuinely seems to believe he's the only one with the right to that particular name.
https://youtube.com/shorts/eT96FbU_a9E?si=johS04spdVBYqyg3
https://www.gregegan.net/images/GregEgan.htm
Too much to ask, surely.
Yes, that's exactly what AI is.
In the spirit of social activism, I will take it upon myself to name all of my children Google, even the ones that already have names.
:-)
Seems to be another Dave Barry, a political activist who passed away in 2016.
This is what "talking to AI" feels like for anything mildly complex.
Reminds me of the toaster in Red Dwarf
https://youtu.be/LRq_SAuQDec?si=vsHyq3YNCCzASkNb
Yet another argument for switching to DuckDuckGo
Personally, if I got a resurrection from it, I would accept the nudge and do the political activism in Dorchester.