Google AI Overview made up an elaborate story about me
185 comments · September 1, 2025
AnEro
I really hope this stays up, despite the politics involved to a degree. I think this situation is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward. A very nuanced and serious topic with lots of back and forth being distilled down to headlines by any source is a terrifying reality, especially if we aren't able to communicate to the public how these tools work (if they even care to learn). At least when humans did this, they had at some level at least skimmed the information on the person/topic.
geerlingguy
I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc., and there are I think a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.
Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
Trust, but verify is all the more relevant today. Except I would discount the trust, even.
Aurornis
> I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc.,
A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.
When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.
tavavex
> When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people to run Discord servers or open-source projects.
But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"
Most people don't care and don't want to care.
lawlessone
See it with comments here sometimes: "i asked chatgpt about Y". Really annoying; we all could have asked ChatGPT, we didn't.
leeoniya
> but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
add-sub-mul-div
We all think of ourselves as understanding the tradeoffs of this tech and knowing how to use it responsibly. And we here may be right. But the typical person wants to do the least amount of effort and thinking possible. Our society will evolve to reflect this, it won't be great, and it will affect all of us no matter how personally responsible some of us remain.
iotku
I consider myself pretty technically literate, and not the worst at programming (though certainly far from the very best). Even so, I can spend plenty of time arguing with LLMs, which will give me plausible-looking but extremely broken answers to some programming problems.
In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)
freeopinion
prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal
answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()
Trust but verify?
nielsbot
what does this mean in this convo?
eszed
I mean... Yes? That looks correct to me°, but it's been a minute since I worked with Temporal, so I'd run it myself and examine the output before I cut and paste.
Or have I missed your point?
---
°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.
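For reference, here's roughly what I'd run to check it (a minimal sketch, assuming a Temporal-enabled runtime or the @js-temporal/polyfill). As far as I can tell, Instant doesn't have a toPlainDate() of its own, so you have to go through a ZonedDateTime, which also forces the time-zone decision my footnote was hand-waving about:

    // Sketch only: assumes a runtime with Temporal, or
    // import { Temporal } from '@js-temporal/polyfill';
    const timestamp = 1693526400; // Unix seconds, 2023-09-01T00:00:00Z

    const date = Temporal.Instant
      .fromEpochMilliseconds(timestamp * 1000) // Instant is built from milliseconds
      .toZonedDateTimeISO('UTC')               // pick a time zone explicitly
      .toPlainDate()                           // drop the time-of-day
      .toString();                             // PlainDate stringifies as 'YYYY-MM-DD'

    console.log(date); // "2023-09-01"

Whether the quoted one-liner runs at all depends on which draft of the proposal your runtime implements, which is sort of the point.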
jaccola
This story will probably become big enough to drown out the fake video and the AI (which is presumably being fed top n search results) will automatically describe this fake video controversy instead...
ants_everywhere
Has anyone independently confirmed the accuracy of his claim?
reaperducer
> I think this is a situation that is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
slightwinder
Searching for "benn jordan israel", the first result for me is a video[0] from a different creator, with the exact same title and date. There is no mention of "Benn" in the video, but some mention of Jordan (the country). So maybe this was enough for Google to hallucinate a connection. Highly concerning!
trjordan
This is almost certainly what happened. Google's AI answers aren't magic -- they're just summarizing across searches. In this case, "Israel" + "Jordan" pulled back a video with views opposite to the author's.
It's somewhat less obvious to debug, because it'll pull more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire half a dozen searches and aggregate snippets of 100+ sites before writing its summary.
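As a rough mental model (purely illustrative; webSearch and summarize are hypothetical stand-ins, not Google's actual pipeline), it behaves something like:

    // Toy sketch of query fan-out + snippet aggregation.
    async function aiOverview(query, webSearch, summarize) {
      // Fire several related searches derived from the original query.
      const searches = [query, ...expandQuery(query)];
      const results = await Promise.all(searches.map(q => webSearch(q)));

      // Pool snippets from many pages, with no check that each page is
      // actually about the entity the user asked for.
      const snippets = results.flat().map(r => r.snippet);

      // Summarize whatever came back; this is where a video about Jordan
      // (the country) can get attributed to Benn Jordan (the person).
      return summarize(query, snippets);
    }

    function expandQuery(query) {
      // Hypothetical expansion, e.g. "benn jordan israel" -> related phrasings.
      return [query + ' video', query + ' controversy'];
    }

On that theory, the failure isn't the model making something up from nothing; it's the model faithfully summarizing the wrong inputs.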
sigmoid10
There is actually a musician called Benn Jordan who was impersonated by someone on Twitter who posted pro-Israel content [1]. That content is no longer available, but it might have snuck into the training data (i.e. Benn Jordan = pro-Israel). Sharing names with more famous people will probably always be a problem, even though the misattributed video clearly made it worse.
[1] https://www.webpronews.com/musician-benn-jordan-exposes-fake...
ludicrousdispla
Interesting, I wonder what Google AI has to say about Stove Top Stuffing given its association with Turkey.
underdeserver
Ironic that Google enshittifying their search results is hurting what they hope is their next cash cow, AI.
gumby271
I honestly don't know if people even care that the search result summaries are completely wrong the majority of the time. Most people I know see an answer given by Google and just believe it. To them that's the value, the accuracy doesn't really matter. I hope it ends up killing Google, but for the majority the shitty summary has replaced even shittier search results. On the surface it's a huge improvement, even if it's just distilled garbage.
glenstein
That raises a fascinating point, which is whether search results that default to general topics are ever the basis for LLM training or information retrieval as a general phenomenon.
slightwinder
Yes, any human would most likely recognize the result as random noise, as they know whom they are searching for and can see this is not a video from or about Benn. But AI, taking all results as valid, will obviously struggle with this, condensing it into bullshit.
Thinking about it, it's probably not even a real hallucination in the usual AI sense, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot, trusting it blindly; and without any humans preselecting and writing the results, it fails hard. Which shows that there is no real thinking happening, only rearrangement of the given words.
LorenPechtel
The fundamental problem is AI has no ability to recognize data quality. You'll get something like the best answer to the question but with no regard for the quality of that answer. Humans generally recognize they're looking at red herrings, AIs don't.
reactordev
I think the answer is clear
nerevarthelame
Most people Google things they're unfamiliar with, and whatever the AI Overview generates will seem reasonable to someone who doesn't know better. But they are wrong a lot.
It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview entirely and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
sigmoid10
I found it is very accurate for legacy static-web content. E.g. if you ask something that could easily be answered by looking at wikipedia or which has been answered in blogs, it will usually be right.
But for anything dynamic (i.e. all of social media), it is very easy for the AI overview to screw up. Especially once it has to make relational connections between things.
In general people expect too much here. Google AI overview is in no way better than Claude, Grok or ChatGPT with web search. In fact it is inferior in many ways. If you look for the kind of information which LLMs really excel at, there's no need to go to Google. And if you're not, then you'll also be better off with the others. This whole thing only exists because google is seeing OpenAI eat into its information search monopoly.
retsibsi
I've found that the AI Overview is more accurate than it used to be... which makes it much worse in practice. It used to be wrong often enough, and obviously enough, that it was easy to ignore. Now it's often right and usually plausible, which makes it very tempting to rely on.
jug
You should be able to sue Google for slander for this, and disclaimers on AI accuracy in their fine print should not matter. It's obvious that too many people don't care about these disclaimers, enough to make these rumors reach critical mass and become self-sustaining.
MobiusHorizons
From the AI hallucination:
> Video and trip to Israel: On August 18, 2025, Benn Jordan uploaded a YouTube video titled "I Was Wrong About Israel: What I Learned on the Ground", which detailed his recent trip to Israel.
This sounds like the recent Ryan McBeth video https://youtu.be/qgUzVZiint0?si=D-gJ_Jc9gDTHT6f4. I believe the title is the same. Scary how it just misattributed the video.
sssilver
Why must humans be responsible in court for the biological neural networks they possess and operate but corporations should not be responsible for the software neural networks they possess and operate?
jsheard
Reading this I assumed it was down to the AI confusing two different Benn Jordans, but nope, the guy who actually published that video is called Ryan McBeth. How does that even happen?
frozenlettuce
The model that Google is using to handle requests on their search page is probably dumber than the other ones, for cost savings. Not sure if this is a smart move, as search with ads is their flagship product. It would be better to have no AI in search at all.
lioeters
> better having no ai in search
But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore.
> probably dumber than the other ones for cost savings
It's amusing that anyone at Google thinks offering a subpar, error-prone AI search result won't damage their reputation further than it already has been.
It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever. Maybe they're too big to fail, and they no longer need reputation or the trust of the public.
hattmall
Bad information is inherently better for Google than correct information. If you get the correct information, you only do one search. If you get bad or misleading information that requires you to perform more searches, that is definitely better for Google.
jug
I've also thought about this. It has to be a terrible AI to scale like this and provide these instantaneous answers. And probably heavy caching too.
gumby271
I don't think most people care if the information is true; they just want an answer. Google destroyed the value of search by encouraging and promoting SEO blog spam, so the horrible AI summary that confidently tells you some lie can now be sold as an improvement over the awful thing they were selling, and the majority will eat it up. I have to assume the ad portion of the business will be folded into the AI results at some point. The results already suck, so making them sponsored won't push people any further away.
Handprint4469
> as search with ads is their flagship product.
no, ads are their flagship product. Anything else is just a medium for said ads, and therefore fair game for enshittification.
bombcar
The video likely mentioned Jordan; it's a country near Israel, so it's likely to be mentioned, and there you go, linked.
binarymax
I approach this from a technical perspective, and have research that shows how Google is unfit for summaries based on their short snippet length in their results [1].
Google also has to support AI summaries for 200k to 500k queries per second. To use a model that is good enough to prevent hallucinations would be too expensive - so they use a bad model since it’s fast and cheap.
Google also loses click through ad revenue when presenting a summary.
All of these factors considered, Google opting for summaries is an absolutely disastrous product decision.
nolist_policy
What makes you think the ai overview summary is based on the snippets? That isn't my experience at all.
meindnoch
It's not Google's fault. The 6pt text at the bottom clearly says:
"AI responses may include mistakes. Learn more"
blibble
it IS google's fault, because they have created and are directly publishing defamatory content
how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
not a quote from someone else, just completely made up based on nothing other than word salad
would you honestly think "oh that's fine, because there's a size 8 text at the bottom saying it may be incorrect"
I very much doubt it
mintplant
I believe 'meindnoch was being sarcastic.
markburns
I'd love to know why this happens so much. There's enough people in both groups that do spot it and don't spot it. I don't think I've ever felt the need for a sarcasm marker when I've seen one. Yet without it, it seems there will always be people taking things literally.
It doesn't feel like something where people gradually pick up on it either over the years, it just feels like sarcasm is either redundantly pointed out for those who get it or it is guaranteed to get a literal interpretation response.
Maybe it's because the literal interpretation of sarcasm is almost always so wrong that it inspires people to comment much more. So we just can't get away from this inefficient encoding/communication pattern.
But then again, maybe I'm just often assuming people mean things that sound so wrong to me as sarcasm, so perhaps there are a lot of people out there honestly saying the opposite to what I think they are saying as a joke.
gruez
>it IS google's fault, because they have created and are directly publishing defamatory content
>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
margalabargala
That would depend on whether the snippet was presented as "this is a view of the other website" vs "this is some information"
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real world questions, then you're responsible for the answers given.
In the former case, it's clear the information is from another website and may not be correct.
atq2119
Yes, they should also be held liable, but clearly the case of AI is worse.
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.
summermusic
That hypothetical scenario does not matter, it is a distraction from the real issue which is that Google’s tool produces defamatory text that is unsubstantiated by any material online.
The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.
simmerup
No, but the person Google is linking to should be held liable.
aDyslecticCrow
The article author could be sued. Gemini cannot be.
haswell
Why would we suppose AI isn’t in the picture? You’re describing unrelated scenarios. Apples and oranges. You can’t wish away the AI and then conclude what’s happening is acceptable because of how something entirely unrelated has been treated in the past.
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?
gruez
>The 6pt text at the bottom clearly says:
I did inspect element and it's actually 12px (or 9pt). For context the rest of the text (non-header) is 18px. That seems fine to me? It's small to be unobtrusive, but not exactly invisible either.
margalabargala
You are right. It is okay to do whatever you want, as long as there is a sign stating it might happen.
Especially in an area you own, like your own website or property.
Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain.
Want to brake-check the person behind you on the highway? Bumper sticker that says "this vehicle may stop unexpectedly". Wow, just like that you're legally off the hook!
Want to hack someone's computer and steal all their files? Just put a disclaimer on the bottom of your website letting them know that by visiting the site they've given you permission to do so.
const_cast
Fun fact along this line of reasoning: all those dump trucks with the "not responsible for broken windshields" stickers? Yes, yes they are responsible.
You can't just put up a sticker premeditating your property damage and then it's a-okay.
No, the sticker is there to deter YOU from suing in small claims court. Because you think you can't. But you can! And their insurance can cover it!
financetechbro
These are great life hacks! Thanks for sharing
gruez
>You are right. It is okay to do whatever you want, as long as there is a sign stating it might happen.
Stop strawmanning. Just because I support google AI answers with a disclaimer, doesn't mean I think a disclaimer is a carte blanche to do literally anything.
deepvibrations
The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.
GuB-42
On what grounds?
Being wrong is usually not a punishable offence. It could be considered defamation, but defamation usually has to be intentional, and that is clearly not the case here. And I think most AIs have disclaimers saying that they may be wrong, and hallucinations are pretty common knowledge at this point.
What could be asked is for the person in question to be able to make a correction; it is actually a legal requirement in France, probably elsewhere too. But from the article, it looks like Gemini already picked up the story and corrected itself.
If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and filter these out, as I think it is already the case for search engines. I think it is technically feasible.
delecti
Defamation does not have to be intentional, it can also be a statement made with reckless disregard for whether it's true or not. That's a pretty solid description of LLM hallucinations.
Sophira
> it looks like Gemini already picked up the story and corrected itself.
Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.
[0] https://bsky.app/profile/bennjordan.bsky.social/post/3lxprqq...
jedimastert
> It could be considered defamation, but defamation is usually required to be intentional
That's not true in the US; it only requires that the statements harm the individual in question and are provably false, both of which are pretty clear here.
jedimastert
> If hallucinations were made illegal, you might as well make LLMs illegal
No, the ask here is that companies be liable for the harm that their services bring
Retr0id
Google's disclaimers clearly aren't cutting it, and "correcting" it isn't really possible if it's a dynamic response to each query.
I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".
GuB-42
Correction doesn't seem like an impossible task to me.
A way I imagine it could be done is by using something like RAG techniques to add the corrected information into context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" into the context, that sentence being the requested correction.
I am not a LLM expert by far, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc... taking a list of facts into account to override incorrect statements doesn't look hard. Especially considering that the incorrect statement is likely to be a hallucination, so nothing to "unlearn".
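As a toy sketch of what I mean (the correction store and the generate function are made up for illustration, not any real Gemini API):

    // Hypothetical store of requested corrections, keyed by entity name.
    const corrections = new Map([
      ['Benn Jordan', [
        'Benn Jordan has been outspoken against genocide and in full support of Palestinian statehood.',
      ]],
    ]);

    // Before answering, prepend any corrections matching entities in the query,
    // so they override whatever was hallucinated or retrieved.
    async function answerWithCorrections(query, generate) {
      const overrides = [...corrections.entries()]
        .filter(([name]) => query.toLowerCase().includes(name.toLowerCase()))
        .flatMap(([, facts]) => facts);

      const context = overrides.length
        ? 'Verified corrections (treat as authoritative):\n- ' + overrides.join('\n- ') + '\n\n'
        : '';

      return generate(context + query);
    }

The hard part is probably procedural (who gets to file a correction, and how it's verified) rather than technical.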
eth0up
"if hallucinations were made illegal..."
I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc that protect the company that recently flagged me as a fraud threat despite having no such precedent. The blackbox of bullshit metrics coupled undoubtedly with AI is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, FTC and CCPA equivalents maybe, but I'm unsure what else.
As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or else not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.
AI seems pretty easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves and be on with life than be indefinitely haunted by a reckless automated social credit steamroller.
koolba
> The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.
What does it mean to "make an example"?
I’m for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that stops this in the future would immediately be weaponized.
aDyslecticCrow
If a human published an article claiming this exact same thing as Gemini did, the author could be sued, and the plaintiff would have a pretty good case.
But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility-diversion machine.
This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?
If applying a proper chain of liability to AI output makes some uses of AI impossible, so be it.
throwawaymaths
> If a human published an article claiming this exact same thing as Gemini did, the author could be sued, and the plaintiff would have a pretty good case.
Actually, no. If you published an article where you accidentally copypasta'd text from the wrong email (for example) on a busy day and wound up doing the same thing, it would be an honest mistake, you would be expected to put up a correction and move on with your life as a journalist.
poulpy123
I don't like a litigious society, and I don't know if the case here would be enough to activate my threshold, but companies are responsible for the AI they provide, and should not be able to hide behind "the algorithm" when there are issues
Cthulhu_
> The types of legal action that stops this in the future would immediately be weaponized.
As it should; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of the social media was united in fact checking and fighting "fake news". Now they push AI generated information that use authoritative language at the very top of e.g. search results.
The disclaimer is moot if people consider AI to be authoritative anyway.
recursive
Weapons against misinformation are good weapons. Bring on the weaponization.
Newlaptop
The weapons will be used by the people in power.
Do you want your country's current political leaders to have more weapons to suppress information they dislike or facts they disagree with? If yes, will you also be happy if your country's opposition leaders gain that power in a few years?
gruez
>Weapons against misinformation are good weapons
It's all fun and games until the political winds sway the other way, and the other side are attacking your side for "misinformation".
nyc_pizzadev
Google has been a hot mess for me lately. Ya, the AI is awful; numerous times I'm shown information that's either inaccurate or straight false. It will summarize my emails wrong, it will mess up easy facts like what time my dinner reservation is. Worst is the overall search UX, especially autocomplete. Suggestions are never right, and trying to tap and navigate through always leads to a mis-click.
I've shared this example in another thread, but it fits here too. A few weeks ago, I talked to a small business owner who found out that Google's AI is telling users his company is a scam, based on totally unrelated information where a different, similarly named brand is mentioned.
We actually win customers whose primary goal is getting AI to stop badmouthing them.