Death by AI
32 comments
· July 19, 2025
zaptrem
A few versions of that overview were not incorrect; there actually was another Dave Barry who did die at the time mentioned. Why does this Dave Barry believe he has more of a right to be the one pointed to for the query "What happened to him," when nothing has happened to him but something most certainly did happen to the other Dave Barry (death)?
alexmorley
Even those versions could well have been interleaved with other AI summaries about Dave Barry that referred to OP without disambiguating which was about who.
It'd be ideal if it disambiguated, à la Wikipedia.
jwr
I'd say this isn't just an AI overview thing. It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it. Various "feedback" forms are mostly ignored.
I had to fight a similar battle with Google Maps, which most people believe to be a source of truth, and it took years until incorrect information was changed. I'm not even sure if it was because of all the feedback I provided.
I see Google as a firehose of information that they spit at me ("feed"); they are too big to be concerned about any inconsistencies, as these don't hurt their business model.
muglug
No, this is very much an AI overview thing. In the beginning Google put the most likely-to-match-your-query result at the top, and you could click the link to see whether it answered your question.
Now, frequently, the AI summaries are on top. The AI summary LLM is clearly a very fast, very dumb LLM that’s cheap enough to run on webpage text for every search result.
That was a product decision, and a very bad one. Currently a search for "suide side squad" yields
> The phrase "suide side squad" appears to be a misspelling of "Suicide Squad"
hughw
Well it was accurate if you were asking about the Dave Barry in Dorchester.
omnicognate
He won a Pulitzer too? Small world.
o11c
I remember when the biggest gripe I had with Google was that when I searched for Java documentation (by class name), it defaulted to showing me the version for 1.4 instead of 6.
jh00ker
I'm interested in how the answer will change once his article gets indexed. "Dave Barry died in 2016, but he continues to dispute this fact to this day."
ChrisMarshallNY
Dave Barry is the best!
That is such a classic problem with Google (from long before AI).
I am not optimistic about anything being changed from this, but hope springs eternal.
Also, I think the trilobite is cute. I have a [real fossilized] one on my desk. My friend stuck a pair of glasses on it, because I'm an old dinosaur, but he wanted to go back even further.
_ache_
Can you please re-consult a physician? I just checked on ChatGPT, and I'm pretty confident you are dead.
ChrisMarshallNY
This brings this classic to mind: https://www.youtube.com/watch?v=W4rR-OsTNCg
devinplatt
This reminds me a lot of the special policies Wikipedia has developed through experience about sensitive topics, like biographies of living persons, deaths, etc.
eloeffler
I know one story that may have become such an experience. It's about Wikipedia Germany and I don't know what the policies there actually are.
A German 90s/2000s rapper (Textor, MC of Kinderzimmer Productions) produced a radio feature about facts and how hard it can be to prove them.
One personal example he added was about his Wikipedia article, which stated that his mother used to be a famous jazz singer in her birth country, Sweden. Except she never was. The story had been added to an album review in a rap magazine years before the article was written. Textor explains that this is part of 'realness' in rap, which has little to do with facts and more to do with attitude.
When they approached Wikipedia Germany, it was very difficult to change this 'fact' about his mother's biography: there was published information about her in a newspaper, and she could not immediately prove who she was. Unfortunately, Textor didn't finish the story and moved on to the next topic in the radio feature.
pyman
I'm worried about this. Companies like Wikipedia spent years trying to get things right, and now suddenly Google and Microsoft (including OpenAI) are using GenAI to generate content that, frankly, can't be trusted because it's often made up.
That's deeply concerning, especially when these two companies control almost all the content we access through their search engines, browsers and LLMs.
This needs to be regulated outside the US [0]. These companies should be held accountable for spreading false information or rumours, as it can have unexpected consequences.
[0] I say outside because in the US, big tech controls the politicians.
Aurornis
> This needs to be regulated. They should be held accountable for spreading false information or rumours,
Regulated how? Held accountable how? If we start fining LLM operators for pieces of incorrect information you might as well stop serving the LLM to that country.
> since it can have unexpected consequences
Generally you hold the person who takes action accountable. Claiming an LLM told you bad information isn’t any more of a defense than claiming you saw the bad information on a Tweet or Reddit comment. The person taking action and causing the consequences has ownership of their actions.
I recall the same hand-wringing over early search engines: There was a debate about search engines indexing bad information and calls for holding them accountable for indexing incorrect results. Same reasoning: There could be consequences. The outrage died out as people realized they were tools to be used with caution, not fact-checked and carefully curated encyclopedias.
> I'm worried about this. Companies like Wikipedia spent years trying to get things right,
Would you also endorse the same regulations against Wikipedia? Wikipedia gets fined every time incorrect information is found on the website?
EDIT: Parent comment was edited while I was replying to add the point about outside of the US. I welcome some country trying to regulate LLMs to hold them accountable for inaccurate results, so we have some precedent for how bad an idea that would be and how quickly citizens would switch to VPNs to access the LLM providers that get turned off for their country in response.
blibble
> If we start fining LLM operators for pieces of incorrect information you might as well stop serving the LLM to that country.
sounds good to me?
pyman
If Google accidentally generates an article claiming a politician in XYZ country is corrupt the day before an election, then quietly corrects it after the election, should we NOT hold them accountable?
Other companies have been fined for misleading customers [0] after a product launch. So why make an exception for Big Tech outside the US?
And why is the EU the only bloc actively fining US Big Tech? We need China, Asia and South America to follow their lead.
[0] https://en.m.wikipedia.org/wiki/Volkswagen_emissions_scandal
jongjong
Maybe it's a genuine problem with AI that it can only hold one idea, one possible version of reality, at any given time. Though I guess many humans have the same issue. I first heard of this idea from Peter Thiel when he described what he looks for in a founder. It seems increasingly relevant to our social structure that the people and systems who make important decisions are able to hold multiple conflicting ideas without ever fully accepting one or the other. Conflicting ideas create decision paralysis of varying degrees, which is useful at times. It seems like an important feature to implement into AI.
It's interesting that LLMs produce each output token as a probability distribution, but it appears that in order to generate the next token (itself expressed as a distribution), the model has to pick a specific word as the last token. It can't just build more probabilities on top of previous probabilities; it has to collapse the previous token's probabilities as it goes?
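For the curious, a minimal sketch of a standard sampling loop (assuming a Hugging Face-style causal LM; the model choice and prompt here are just illustrative) shows that collapse happening at every step: each iteration turns the full distribution into one concrete token before anything is fed back to the model.

    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative model; any causal LM works the same way.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Dave Barry is", return_tensors="pt").input_ids
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]     # scores for the next token only
        probs = F.softmax(logits, dim=-1)        # full distribution over the vocabulary
        next_id = torch.multinomial(probs, 1)    # collapsed to one concrete token
        ids = torch.cat([ids, next_id], dim=-1)  # only the chosen token is fed back in;
                                                 # the rest of the distribution is discarded
    print(tok.decode(ids[0]))

Beam search or drawing several samples keeps a few alternatives alive for a while, but each finished output still commits to exactly one token per position.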
herval
I'm not sure that's the case, and it's quite easily proven: if you ask an LLM any question, then doubt its response, it'll change its mind and offer a different interpretation. That's an indication it holds multiple interpretations, depending on how you ask; otherwise it would dig in.
You can also see decision paralysis in action if you implement CoT: it's common to see the model "pondering" a bunch of possible options before picking one.
rf15
With so many reports like this, it's not a question of working out the kinks. Are we getting close to our very own Stop the Slop campaign?
randcraw
Yeah, after working with AI daily for a decade in a domain where it _does_ work predictably and reliably (image analysis), I continue to be amazed at how many of us trust LLM-based text output as being useful. If any human source got their facts wrong this often, we'd surely dismiss them as a counterproductive imbecile.
Or elect them President.
BobbyTables2
HAL 9000 in 2028!
locallost
I am beginning to wonder why I use it, but the idea of it is so tempting: try to Google something and get stuck because it's difficult to find, or ask and get an instant response. It's not hard to guess which one is more inviting, but it ends up being a huge time sink anyway.
trod1234
Regulation with active enforcement is the only civil way.
The whole point of regulation is for when the profit motive forces companies towards destructive ends for the majority of society. The companies are legally obligated to seek profit above all else, absent regulation.
Aurornis
> Regulation with active enforcement is the only civil way.
What regulation? What enforcement?
These terms are useless without details. Are we going to fine LLM providers every time their output is wrong? That’s the kind of proposition that sounds good as a passing angry comment but obviously has zero chance of becoming a real regulation.
Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries. People who use LLMs would sign up for VPNs and carry on with their lives.
trod1234
Regulations exist to override profit motive when corporations are unable to police themselves.
Enforcement ensures accountability.
Fines don't do much in a fiat money-printing environment.
Enforcement is accountability, the kind that stakeholders pay attention to.
Something appropriate would be this: if AI were used in a safety-critical or life-sustaining environment and harm or loss were caused, those who chose to use it would be presumed guilty until they proved their innocence, not just civilly but also criminally, and that person and decision would have to be documented ahead of time.
> Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries.
This is a fallacy. It's a spectrum: research would still occur, but it would be tempered by law and accountability, instead of the wild west where it's much more profitable to destroy everything through chaos. Chaos is quite profitable until it spreads systemically and ends everything.
AI integration at a point where it can impact the operation of nuclear power plants through interference (perceptual or otherwise) is just asking for a short path to extinction.
It's quite reasonable that the needs of national security trump a private business making profit in a destructive way.
draw_down
Man, this guy is still doing it. Good for him! I used to read his books (compendia of his syndicated column) when I was a kid.
SoftTalker
Dave Barry is dead? I didn't even know he was sick.
A popular local spot has a summary on google maps that says:
> Vibrant watering hole with drinks & po' boys, as well as a jukebox, pool & electronic darts.
It doesn't serve po' boys, have a jukebox (though the playlists are impeccable), have pool, or have electronic darts. (It also doesn't really have drinks in the way this implies. It's got beer and a few canned options. No cocktails or mixed drinks.)
They got a catty one-star review a month ago from someone who really wanted to play pool or darts, complaining about the misleading description.
I'm sure the owner reported it. I reported it. I imagine other visitors have as well. At least a month on, it's still there.