AI Hallucination Cases Database

36 comments · May 25, 2025

irrational

I still think confabulation is a better term for what LLMs do than hallucination.

Hallucination - A hallucination is a false perception where a person senses something that isn't actually there, affecting any of the five senses: sight, sound, smell, touch, or taste. These experiences can seem very real to the person experiencing them, even though they are not based on external stimuli.

Confabulation - Confabulation is a memory error consisting of the production of fabricated, distorted, or misinterpreted memories about oneself or the world. It is generally associated with certain types of brain damage or a specific subset of dementias.

bluefirebrand

You're not wrong in a strict sense, but you have to remember that most people aren't that strict about language

I would bet that most people define the words like this:

Hallucination - something that isn't real

Confabulation - a word that they have never heard of

resonious

I would go one step further and suppose that a lot of people just don't know what confabulation means.

static_void

We should not bend over backwards to use language the way ignorant people do.

AllegedAlec

We should not bend over backwards to use language the way anally retentive people demand we do.

furyofantares

I like communicating with people using a shared understanding of the words being used, even if I have an additional, different understanding of the words, which I can use with other people.

That's what words are, anyway.

add-sub-mul-div

"Bending over backwards" is a pretty ignorant metaphor for this situation, it describes explicit activity whereas letting people use metaphor loosely only requires passivity.

bee_rider

It seems like these are all anthropomorphic euphemisms for things that would otherwise be described as bugs, errors (in the “broken program” sense), or error (in the “accumulation of numerical error” sense), if LLMs didn’t have the easy-to-anthropomorphize chat interface.

diggan

Imagine you have a function called "is_true", but it only gets it right 60% of the time. We're doing this within CS/ML, so let's call that "correctness" or something fancier. For that function to be valuable, would we need to hit 100% correctness? Probably most of the time, yeah. But sometimes, maybe even rarely, we're fine with it being less than 100%, as long as it's as high as possible.

So from this point of view, it's not a bug or an error that it currently sits at 60%; if we manage to find a way to hit 70%, that would be better. But to talk about that, we need to call this "correct for the most part, but could be better" concept something. So we look at what we already know and are familiar with, try to draw parallels, and maybe even borrow some names/words.
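
A rough sketch of that idea, assuming a hypothetical is_true classifier and a made-up labeled test set (the numbers and examples are invented for illustration):

    import random

    random.seed(0)

    # Made-up labeled test set for illustration.
    test_set = [
        ("Water boils at 100 C at sea level.", True),
        ("The moon is made of cheese.", False),
        ("Paris is the capital of France.", True),
        ("2 + 2 = 5", False),
    ] * 250  # 1000 examples so the rate is visible

    def is_true(statement, true_label):
        # Stand-in for a real classifier: returns the right answer ~60% of the time.
        return true_label if random.random() < 0.6 else not true_label

    hits = sum(is_true(s, label) == label for s, label in test_set)
    # ~60%; the goal is to push this higher, not to call 60% a "bug".
    print(f"correctness: {hits / len(test_set):.1%}")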

bee_rider

This doesn’t seem too different from my third thing, error (in the “accumulation of numerical error” sense).

timewizard

> but if we manage to find a way to hit 70%, it would be better.

Yet still absolutely worthless.

> "correct for most part, but could be better" concept something.

When humans do that we just call it "an error."

> so lets call that "correctness" or something

The appropriate term is "confidence." These LLM tools could all give you a confidence rating with each and every "fact" they attempt to relay to you. Of course they don't actually do that, because no one would use a tool that confidently gives you answers based on a 70% self-confidence rating.
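
A rough sketch of what such a per-answer confidence rating could look like, assuming the model exposes per-token log-probabilities (the numbers below are made up):

    import math

    # Hypothetical per-token log-probabilities for one generated answer.
    token_logprobs = [-0.05, -0.30, -1.20, -0.10, -2.10]

    # Geometric mean of the token probabilities as a crude answer-level confidence.
    confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
    print(f"confidence: {confidence:.2f}")

    # Refuse to present the answer as fact below some threshold.
    if confidence < 0.7:
        print("low confidence: flag this answer for review instead of stating it")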

We can quibble over terms but more appropriately this is just "garbage." It's a giant waste of energy and resources that produces flawed results. All of that money and effort could be better used elsewhere.

skybrian

It’s a metaphor. A hardware “bug” is occasionally due to an actual insect in the machinery, but usually it isn’t, and for software bugs it couldn’t be.

The word “hallucination” was pretty appropriate for images made by DeepDream.

https://en.m.wikipedia.org/wiki/DeepDream

georgemcbay

They aren't really bugs in the traditional sense, though, because all LLMs ever do is "hallucinate". Seeing what we call a hallucination as something fundamentally different from what we consider a correct response is further anthropomorphising the LLM.

We just label it with that word when it statistically generates something we know to be wrong, but functionally what it did in that case is no different than when it statistically generated something that we know to be correct.
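
A rough sketch of that point, using a toy next-token sampler with made-up probabilities: the "correct" and "hallucinated" completions come out of exactly the same weighted draw, with no separate code path for either.

    import random

    random.seed(1)

    # Toy distribution over next tokens after "The capital of Australia is".
    next_token_probs = {
        "Canberra": 0.55,    # what we'd label correct
        "Sydney": 0.35,      # what we'd label a hallucination
        "Melbourne": 0.10,   # also a hallucination
    }

    def sample(probs):
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    for _ in range(5):
        print("The capital of Australia is", sample(next_token_probs))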

rollcat

There's a simpler word for that: lying.

It's also equally wrong. Lying implies intent. Stop anthropomorphising language models.

sorcerer-mar

Lying is different from confabulation. As you say, lying implies intent. Confabulation does not necessarily, ergo it's a far better word than either lying or hallucinating.

A person with dementia confabulates a lot, which entails describing reality "incorrectly", but it's not quite fair to describe it as lying.

bandrami

A liar seeks to hide the truth; a confabulator is indifferent to the truth entirely. It's an important distinction. True statements can still be confabulations.

maxbond

I think "apophenia" (attributing meaning to spurious connections) or "pareidolia" (the form of aphonenia where we see faces where there are none) would have been good choices, as well.

cratermoon

anthropoglossic systems.

Terr_

Largely Logorrhea Models.

matkoniecz

And why is confabulation better than one of those?

Flemlo

So what's the number of cases where it was wrong but no one checked?

add-sub-mul-div

Good point. People putting the least amount of effort into their job that they can get away with is universal; judges are no more immune to it than lawyers.

mullingitover

This seems like a perfect use case for a legal MCP server that can provide grounding for citations. Protomated already has one[1].

[1] https://github.com/protomated/legal-context
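
A rough sketch of what such a grounding tool could look like, using the MCP Python SDK's FastMCP helper; the tool name, case list, and lookup are hypothetical and not taken from the linked project.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("legal-citation-check")

    # Hypothetical stand-in for a real case-law index.
    KNOWN_CASES = {
        "Marbury v. Madison, 5 U.S. 137 (1803)",
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
    }

    @mcp.tool()
    def verify_citation(citation: str) -> str:
        """Report whether a citation appears in the local case database."""
        if citation in KNOWN_CASES:
            return f"FOUND: {citation}"
        return f"NOT FOUND: {citation} -- do not cite without manual verification"

    if __name__ == "__main__":
        mcp.run()  # serves the tool to an MCP client over stdio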

anshumankmr

Can we submit ChatGPT convo histories??