Discovering what we think we know is wrong
12 comments · July 18, 2025
ants_everywhere
zdragnar
If you simply ask Gemini what the brain uses for fuel, it gives an entirely different answer that leaves fatty acids out completely and reinforces the glucose story.
LLMs tell you what you want to hear, sourced from a random sample of data, not what you need to hear, based on professional or expert opinion.
ants_everywhere
When I ask the same question it says primarily glucose and also mentions ketone bodies. It mentions that the brain is flexible and while it normally metabolizes glucose it may sometimes need to metabolize other things. This is both at gemini.google.com and using google.com in "AI mode" in private browsing.
gemini.google.com mentions lactate and fat. But it also knows I care about science. I'm not sure how much history is used currently.
But this is kind of silly because if you're a member of the public and ask a scientist what the brain uses as fuel they'll also say glucose. If you've ever been in a conversation with someone who felt the need to tell you *every detail* of everything they know, then you'll understand that that's not how human communication typically works. So if you want something more specific you have to start the conversation in a way that elicits it.
What facts did it hallucinate and which are true?
zahlman
I get that this is intended to be parsed "Discovering (what we think we know) is (wrong)", but it took me a while to discard the alternative "discovering (what we think (we know is wrong))".
Sniffnoy
I think this could use a more informative title? The title this was posted with is actually less informative than the original title.
strangattractor
Derek has a little thought experiment at the end.
barisozmen
Answer to his thought experiment: yes, I believe a sufficiently advanced AI could have told us that. Scientists who have been fed wrong information can still come up with completely new ideas, making what we know less wrong.
That being said, I don't think current token-predictors can do that.
tptacek
My read of this was that AI is fundamentally limited by the lack of access to the new empirical data that drove this discovery; that it couldn't have been inferred from the existing corpus of knowledge.
readthenotes1
Maybe an AI will be smart enough to realize that there's more than one explanation for a low level of triglycerides in neurons.
The RICE myth and the lactic acid myth will surely be part of the training material, so the AI will realize that there's a fair amount of unjustified conclusions in the bioworld.
ethan_smith
The RICE protocol (Rest, Ice, Compression, Elevation) for injuries has been largely debunked - inflammation is now understood as a necessary healing process. Similarly, lactic acid was wrongly blamed for muscle soreness when it's actually a fuel source during exercise, paralleling how we misunderstood neuronal fatty acid metabolism.
zahlman
Is inflammation not still considered to be harmful in the long term? (Is that not why we're still expected to care about omega-6 vs omega-3 dietary fatty acids?) What is the new explanation for muscle soreness?
> So let’s ask ourselves: would AI have told us this?
Why ask ourselves, when we can ask the AI? Here's the start of my conversation with Gemini:
> Me: What is known about fatty acid combustion in the brain?
> Gemini: "The Brain's Surprising Use of Fat for Fuel." For a long time, the brain was thought to rely almost exclusively on glucose for its immense energy needs. However, a growing body of research reveals that fatty acid combustion, or beta-oxidation, plays a more significant and complex role in brain energy metabolism and overall neurological health than previously understood. While not the brain's primary fuel source under normal conditions, the breakdown of fatty acids is crucial for various functions, particularly within specialized brain cells and under specific physiological states....
It cites a variety of articles going back at least to the 1990s.
So,
> would AI have told us this?
Yes, and it did.