How outdated information hides in LLM token generation probabilities
15 comments · January 10, 2025 · ascorbic
freehorse
Does it search the internet for that? I assume so, because otherwise claiming how often something is cited doesn't make sense, but it would be interesting to know for sure. Even GPT-4o mini with Kagi gets it right with search enabled, and wrong with search disabled (I tried a few times to make sure).
sd9
I don’t think the public o1 can search the internet yet, unlike 4o. In principle it could know that something is more commonly cited based on its training data. But it could also just be hallucinating.
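One way to probe that, rather than guessing, is to look at the token probabilities directly. A minimal sketch, assuming the OpenAI Python SDK, a logprobs-capable model (gpt-4o-mini here; o1 doesn't expose logprobs), and a stand-in prompt for the article's actual question:

```python
# Sketch: inspect which answer tokens the model favours.
# Assumes the OpenAI Python SDK; the prompt below is a placeholder
# for the article's question, not the exact wording used there.
import math
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "How tall is the mountain in metres? Reply with just the number.",
    }],
    max_tokens=5,
    logprobs=True,
    top_logprobs=5,   # also return the runner-up tokens at each position
)

# Print the alternatives the model considered for the first answer token,
# e.g. the split between tokens leading to 1611 vs 1622.
for alt in resp.choices[0].logprobs.content[0].top_logprobs:
    print(f"{alt.token!r}: p={math.exp(alt.logprob):.3f}")
```

If the probability mass at the first diverging token leans heavily toward the outdated figure even with search disabled, that points to training-data frequency rather than a live lookup.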
asl2D
Could the claim about citation frequency just be an answer pattern, rather than the model's actual reasoning?
freehorse
Yeah, there could be parts of the training set where 1611 is explicitly called the official figure and 1622 is explicitly called the most common answer. But I think it could also have access to search results directly. Is there a way to know whether it does or not?
blueflow
How could a language model infer that the official information overrules anything else?
ben_w
Same way as we can: learning which sources are more trustworthy.
There are limits to how far you can go with this — not only do humans make mistakes with this, but even in the abstract theoretical it can never be perfect: https://en.wikipedia.org/wiki/Münchhausen_trilemma — but it is still the "how".
mistercow
I’m not sure what kind of response you’re looking for, or if this is a rhetorical question or not. But “how could a language model infer…?” can be asked about a whole lot of things that language models have no problem reliably inferring.
blueflow
> that language models have no problem reliably inferring
... the article did give me a different impression.
0xKelsey
> The scenario that I’m worried about, and that is playing out right now, is that they get good enough that we (or our leaders) become overconfident in their abilities and start integrating them into applications that they just aren’t ready for without a proper understanding of their limitations.
Very true.
Terr_
> Welcome to the era of generative AI, where a mountain can have multiple heights, but also only one height, and the balance of my bank account gets to determine which one that is. All invisible to the end user and then rationalised away as a coincidence.
I've always found the idea of untraceable, unfixable, unpredictable bugs in software... Offensive. Dirty. Unprofessional.
So the last couple of years have been disconcerting, as a non-trivial portion of people who I thought felt similarly started to overlook it in LLMs, while also integrating those LLMs into flows where the bad output can't even be detected.
choeger
As it turns out, correctness very often simply doesn't matter. Or not as much as one would intuitively think.
How many shops are there optimizing "business strategies" with data that's -essentially- garbage?
croes
For that, LLMs are good, but I bet some people want to use them for things where correctness is vital.
delusional
> How many shops are there optimizing "business strategies" with data that's -essentially- garbage?
How many of those shops are knowingly optimizing with garbage?
I'd argue that most of this data, which I would agree is garbage, is actually processed into seemingly good data through the complex and highly human process of self-deception and lies.
You don't tell the boss that the system you worked on for two months is generating garbage, because then he'll replace you with someone who wouldn't tell him that. Instead you skirt evaluating it, even though you know better, and tell him it's working fine. If the idiot chooses to do something stupid with your bad data, then that's his problem.
The o1 example is interesting. In the CoT summary it acknowledges that the most recent official information is 1611m, but it then chooses to say 1622 because it's more commonly cited. It's like it over-thinks itself into the wrong answer.
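That reads less like deliberate reasoning and more like the effect the article's title describes: if the training data cites 1622 far more often than 1611, the likelier token can win even after the trace has surfaced the official figure. A toy illustration with made-up probabilities (not o1's actual distribution):

```python
# Toy illustration of how a more frequently cited value wins under
# greedy decoding. The probabilities are invented for the example and
# are not the model's real numbers.
candidates = {
    "1622": 0.62,   # older figure, cited far more often in training data
    "1611": 0.31,   # newer official figure, cited less often
    "1610": 0.07,
}

greedy_pick = max(candidates, key=candidates.get)
print(greedy_pick)  # -> "1622": the common answer beats the official one
```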