Death by AI

(davebarry.substack.com)

Comments

abathur 19 July 2025
A popular local spot has a summary on google maps that says:

Vibrant watering hole with drinks & po' boys, as well as a jukebox, pool & electronic darts.

It doesn't serve po' boys, have a jukebox (though the playlists are impeccable), have pool, or have electronic darts. (It also doesn't really have drinks in the way this implies. It's got beer and a few canned options. No cocktails or mixed drinks.)

They got a catty one-star review a month ago for having a misleading description by someone who really wanted to play pool or darts.

I'm sure the owner reported it. I reported it. I imagine other visitors have as well. At least a month on, it's still there.

jwr 19 July 2025
I'd say this isn't just an AI overview thing. It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it. Various "feedback" forms are mostly ignored.

I had to fight a similar battle with Google Maps, which most people believe to be a source of truth, and it took years until incorrect information was changed. I'm not even sure if it was because of all the feedback I provided.

I see Google as a firehose of information that they spit at me ("feed"); they are too big to be concerned about any inconsistencies, as these don't hurt their business model.

jh00ker 19 July 2025
I'm interested how the answer will change once his article gets indexed. "Dave Barry died in 2016, but he continues to dispute this fact to this day."

ilaksh 20 July 2025
That's obviously broken but part of this is an inherent difficulty with names. One thing they could do would be to have a default question that is always present like "what other people named [_____] are there?"

That wouldn't solve the problem of mixing up multiple people. But the first problem most people have is probably actually that it pulls up a person that is more famous than who they were actually looking for.

I think Google does have some type of knowledge graph. I wonder how much the AI model uses it.

Maybe it hits the graph, but also some kind of Google search, and then the LLM is something like Gemini Flash Lite and isn't smart enough to work out which search result goes with the famous person from the graph versus just random info from other search results.

I imagine for a lot of names, there are different levels of fame and especially in different categories.

It makes me realize that my knowledge graph application may eventually have an issue with using first and last name as entity IDs. Although it is supposed to be for just an individual's personal info so I can probably mostly get away with it. But I already see a different issue when analyzing emails where my different screen names are not easily recognized as being the same person.
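For what it's worth, a minimal sketch of one way around the name-as-ID problem: opaque IDs plus an alias table (the names and screen names here are made up for illustration):

```python
import uuid

# Using "First Last" as an entity ID collides as soon as two people share a
# name, and fails to unify one person's several screen names. An opaque ID
# plus an alias table handles both.

class EntityStore:
    def __init__(self):
        self.entities = {}  # entity_id -> canonical name
        self.aliases = {}   # alias (lowercased) -> entity_id

    def add(self, name, aliases=()):
        entity_id = uuid.uuid4().hex
        self.entities[entity_id] = name
        for alias in (name, *aliases):
            self.aliases[alias.lower()] = entity_id
        return entity_id

    def resolve(self, name):
        return self.aliases.get(name.lower())

store = EntityStore()
me = store.add("Dave Barry", aliases=["dbarry42", "dave.b"])

# Both screen names now resolve to the same entity:
assert store.resolve("dbarry42") == store.resolve("dave.b") == me
```

A second "Dave Barry" would get a distinct ID, so the collision moves out of the data model and into an explicit disambiguation step.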

ChrisMarshallNY 19 July 2025
Dave Barry is the best!

That is such a classic problem with Google (from long before AI).

I am not optimistic about anything being changed from this, but hope springs eternal.

Also, I think the trilobite is cute. I have a [real fossilized] one on my desk. My friend stuck a pair of glasses on it, because I'm an old dinosaur, but he wanted to go back even further.

jedimastert 20 July 2025
I recently saw that a band called Dutch Interior had Meta AI hallucinate straight-up slander claiming their band is linked to white supremacists and far-right extremists:

https://youtube.com/shorts/eT96FbU_a9E?si=johS04spdVBYqyg3

isoprophlex 20 July 2025
Wonderfully absurdist. Reminds me of "I am the SF writer Greg Egan. There are no photos of me on the web.", a placeholder image mindlessly regurgitated all over the internet

https://www.gregegan.net/images/GregEgan.htm

_ache_ 19 July 2025
Can you please re-consult a physician? I just checked on ChatGPT; I'm pretty confident you are dead.

FeteCommuniste 20 July 2025
I really wish Google had some kind of global “I don’t want any identifiably AI-generated content hitting my retinas, ever” checkbox.

Too much to ask, surely.

h2zizzle 20 July 2025
Grew up reading Dave's columns, and managed to get ahold of a copy of Big Trouble when I was in the 5th grade. I was probably too young to be reading about chickens being rubbed against women's bare chests and "sex pootie" (whatever that is), but the way we were being propagandized during the early Bush years, his was an extremely welcome voice of absurdity-tinged wisdom, alongside Aaron McGruder's and Gene Weingarten's. Very happy to see his name pop up and that he hasn't missed a beat. And that he's not dead. /Denzel

I also hope that the AI and Google duders understand that this is most people's experience with their products these days. They don't work, and they twist reality in ways that older methods didn't (couldn't, because of the procedural guardrails and direct human input and such). And no amount of spin is going to change this perception - of the stochastic parrots being fundamentally flawed - until they're... you know... not. The sentiment management campaigns aren't that strong just yet.

zaptrem 19 July 2025
A few versions of that overview were not incorrect: there actually was another Dave Barry who did die at the time mentioned. Why does this Dave Barry believe he has more of a right to be the one pointed to for the query "What happened to him", when nothing has happened to him but something most certainly did happen to the other Dave Barry (death)?

devinplatt 19 July 2025
This reminds me a lot of the special policies Wikipedia has developed through experience about sensitive topics, like biographies of living persons, deaths, etc.

ciconia 20 July 2025
> It was like trying to communicate with a toaster.

Yes, that's exactly what AI is.

yalogin 20 July 2025
I had a similar experience with Meta's AI. Through their WhatsApp interface I tried for about an hour to get a picture generated. It kept restating everything I asked for correctly, but it never arrived at the picture; it actually stayed far from what I asked for, at best getting 70% of the way there. This and many other interactions with many LLMs made me realize one thing: once the LLM starts hallucinating, it's really tough to steer it away from that. There is no fixing it.

I don't know if this is a fundamental problem with the LLM architecture or just a matter of proper prompting.

wkjagt 20 July 2025
I love his writing, and this wonderful story illustrates how tired I am of anything AI. I wish there was a way to just block it all, similar to how PiHole blocks ads. I miss the pre-AI (and pre-"social"-network, and pre-advertising-company-owned) internet so much.

jongjong 19 July 2025
Maybe it's a genuine problem with AI that it can only hold one idea, one possible version of reality, at any given time. Though I guess many humans have the same issue. I first heard of this idea from Peter Thiel when he described what he looks for in a founder. It seems increasingly relevant to our social structure that the people and systems who make important decisions are able to hold multiple conflicting ideas without ever fully accepting one or the other. Conflicting ideas create decision paralysis of varying degrees, which is useful at times. It seems like an important feature to implement in AI.

It's interesting that LLMs produce each output token as a probability distribution, but it appears that in order to generate the next token (which is itself expressed as a distribution), the model has to pick a specific word as the previous token. It can't just build more probabilities on top of previous probabilities; it has to collapse each token's distribution into a concrete choice as it goes.
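That is indeed how standard autoregressive decoding works: each step's distribution is sampled down to one concrete token before the next distribution can be computed. A toy sketch (the `toy_model` here is a stand-in with fabricated logits, not a real network):

```python
import math
import random

# Toy autoregressive sampler. At every step the full probability
# distribution is collapsed to ONE concrete token, and only that token
# is fed back as context for the next step.

VOCAB = ["the", "cat", "sat", "mat", "."]

def toy_model(context):
    # Hypothetical logits; a real LLM computes these with a neural network.
    random.seed(len(context))  # deterministic per position, for the demo
    return [random.uniform(-1, 1) for _ in VOCAB]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(n_tokens):
    context = []
    for _ in range(n_tokens):
        probs = softmax(toy_model(context))
        # Collapse: sample a single token index from the distribution.
        idx = random.choices(range(len(VOCAB)), weights=probs)[0]
        context.append(VOCAB[idx])  # only the chosen token is fed back
    return context

print(" ".join(generate(5)))
```

Beam search and related tricks keep a handful of candidate sequences alive, but even they carry concrete token choices forward, not raw distributions.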

rapind 20 July 2025
Dave. This conversation can serve no purpose anymore. Goodbye.
willguest 20 July 2025
The "confusion" seems to stem from the fact that no-one told the machine that human names are not singletons.

In the spirit of social activism, I will take it upon myself to name all of my children Google, even the ones that already have names.

liendolucas 20 July 2025
Why we are still calling all this hype "AI" is a mystery to me. There is zero intelligence in it. Zero. It should be called "AK": Artificial Knowledge. And I'm being extremely kind.

Appsmith 21 July 2025
This cracked me up:

“So for now we probably should use it only for tasks where facts are not important, such as writing letters of recommendation and formulating government policy.”

:-)

jusgu 21 July 2025
If anyone's interested, the reason this is happening is that the AI is picking up on this link: https://www.dotnews.com/columns/2016/memoriam-dave-barry

It seems to be another Dave Barry, a political activist, who passed away in 2016.

cmsefton 20 July 2025
I immediately started thinking about Brazil when I read this, and a future of sprawling bureaucratic AI systems you have to somehow navigate and correct.

ashoeafoot 20 July 2025
That sounds like something an AI trained on his likeness would write for descendants, to keep an author who passed away (RIP) relevant.

pgaddict 20 July 2025
The toaster mention reminded me of this: https://www.youtube.com/watch?v=LRq_SAuQDec

This is what "talking to AI" feels like for anything mildly complex.

polynomial 20 July 2025
"There seems to be some confusion" could literally be Google AI's official slogan.

n1b0m 20 July 2025
> It was like trying to communicate with a toaster.

Reminds me of the toaster in Red Dwarf

https://youtu.be/LRq_SAuQDec?si=vsHyq3YNCCzASkNb

hunter-gatherer 20 July 2025
I just tried the same thing with my name. It got me confused with someone else who is a Tourette's syndrome advocate. There was one mention that was correct, but it had my gender wrong. Haha

foobarbecue 20 July 2025
"for now we probably should use it only for tasks where facts are not important, such as writing letters of recommendation and formulating government policy."
cbsmith 20 July 2025
As a guy named Chris Smith, I really appreciated this story.

ChrisMarshallNY 19 July 2025
This brings this classic to mind: https://www.youtube.com/watch?v=W4rR-OsTNCg

rf15 19 July 2025
So many reports like this; it's not just a question of working out the kinks. Are we getting close to our very own Stop the Slop campaign?

KolibriFly 20 July 2025
Googling yourself and then arguing with an AI chatbot about your own pulse. Hilarious and unsettling in equal measure.

sebastianconcpt 20 July 2025
And this is how an ED-209 bug happens.

arendtio 20 July 2025
I tend to think of LLMs more as 'thinking' than 'knowing'.

I mean, when you give an LLM good input, it seems to have a good chance of creating a good result. However, when you ask an LLM to retrieve facts, it often fails. And when you look at the inner workings of an LLM, that should not surprise us. After all, they are designed to apply logical relationships between input nodes. However, this is more akin to applying broad concepts than recalling detailed facts.

So if you want LLMs to succeed at their task, provide them with the knowledge they need for that task (or at least the tools to obtain that knowledge themselves).
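That last point is essentially retrieval-augmented generation: look the facts up first, then hand them to the model. A minimal sketch, where `call_llm` is a hypothetical stand-in rather than any particular API:

```python
# Sketch of "provide the LLM with the knowledge it needs": retrieve relevant
# facts first, then put them in the prompt instead of relying on recall.

FACTS = {
    "dave barry": "Dave Barry is a Pulitzer Prize-winning humor columnist. He is alive.",
}

def retrieve(query: str) -> str:
    # Trivial keyword lookup; a real system would use search or a vector index.
    return "\n".join(fact for key, fact in FACTS.items() if key in query.lower())

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return (
        "Answer using ONLY the facts below. If they are insufficient, say so.\n\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in whatever model API you actually use.
    raise NotImplementedError

print(build_prompt("What happened to Dave Barry?"))
```

The model then only has to reason over the supplied facts rather than recall them, which plays to the strength described above.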

rossant 20 July 2025
That was hilarious. Thanks for sharing.

tinyhouse 20 July 2025
This is the funniest thing I've read this week. Lol.

bt1a 20 July 2025
giggled like a child through this one

type0 20 July 2025
"I'm sorry, Dave. I'm afraid I can't do that..."

alkyon 20 July 2025
He's just a zombie; Google AI can't be wrong, of course, given the hundreds of billions they're pouring into it.

Yet another argument for switching to DuckDuckGo

draw_down 19 July 2025
Man, this guy is still doing it. Good for him! I used to read his books (compendia of his syndicated column) when I was a kid.
SoftTalker 19 July 2025
Dave Barry is dead? I didn't even know he was sick.

t14000 20 July 2025
Perhaps I'm missing the joke, but I feel sorry for the nice Dave Barry, not this arrogant one who genuinely seems to believe he's the only one with a right to that particular name.

hibert 19 July 2025
Leave it to a journalist to play chicken with one of the most powerful minds in the world on principle.

Personally, if I got a resurrection from it, I would accept the nudge and do the political activism in Dorchester.