
Simple Explanation of LLMs

19 comments

· March 4, 2025

A_D_E_P_T

It's all prediction. Wolfram has been saying this from the beginning, I think. It hasn't changed and it won't change.

But it could be argued that the human mind is fundamentally similar. That consciousness is the combination of a spatial-temporal sense with a future-oriented simulating function. Generally, instead of simulating words or tokens, the biological mind simulates physical concepts. (Needless to say, if you imagine and visualize a ball thrown through the air, you have simulated a physical and mathematical concept.) One's ability to internally form a representation of the world and one's place in it, coupled with a subjective and bounded idea of self in objective space and time, results in what is effectively a general predictive function which is capable of broad abstraction.
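
A rough sketch of that parallel, with every number and the toy vocabulary invented purely for illustration: both cases reduce to "given the state so far, guess what comes next."

    import random

    # Physical prediction: where will a thrown ball be a moment from now?
    def predict_ball(position, velocity, dt=0.1, g=9.81):
        x, y = position
        vx, vy = velocity
        return (x + vx * dt, y + vy * dt - 0.5 * g * dt * dt)

    # Token prediction: sample the next word from a (toy, hand-made) distribution.
    def predict_token(context, table):
        words, probs = zip(*table.get(context, [("<unk>", 1.0)]))
        return random.choices(words, weights=probs)[0]

    print(predict_ball((0.0, 2.0), (3.0, 4.0)))           # next point on the arc
    toy_table = {"the ball was": [("thrown", 0.6), ("caught", 0.3), ("red", 0.1)]}
    print(predict_token("the ball was", toy_table))       # e.g. "thrown"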

A large facet of what's called "intelligence" -- perhaps the largest facet -- is the strength and extensibility of the predictive function.

I really need to finish my book on this...

mdp2021

With the critical difference that predicting facts and predicting verisimilitude are massively different operations.

A_D_E_P_T

I don't think that anybody predicts "facts" -- there are no oracles, and even when you predict a physical concept, it's very easy to get things wrong. Outcomes are, in some cases, effectively statistical.

(A physical concept could be something as simple as how to catch a frisbee, or, alternatively, imagine a cat trying to predict how best to swipe at a fleeing mouse. If the mouse zigs when it could have zagged, the cat, for all its well-honed instincts, may miss. It may have predicted wrongly.)

Predicting tokens is really quite similar. I really think that it's the same type of thing.

Getting facts right is a matter of error correction and knowledge-base utilization, which is why "reasoning" models with error-correction layers and RAG are so good.
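
As a minimal sketch of the retrieval half of that idea (the knowledge base, the word-overlap scoring, and the prompt format are all invented here; a real RAG pipeline would use learned embeddings and an actual model call):

    # Toy retrieval-augmented generation: ground the answer in retrieved facts.
    KNOWLEDGE_BASE = [
        "The Transformer architecture was introduced in 2017.",
        "RLHF fine-tunes a model with human preference feedback.",
        "Attention lets each token weigh every other token in the context.",
    ]

    def retrieve(question, docs, k=2):
        # Crude relevance score: shared lowercase words (real systems use embeddings).
        q_words = set(question.lower().split())
        scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
        return scored[:k]

    question = "When was the Transformer introduced?"
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    prompt = f"Answer using only these facts:\n{context}\n\nQuestion: {question}"
    print(prompt)   # this grounded prompt would then be sent to the model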

mdp2021

> there are no oracles

If you mean "guessing without grounds", that is exactly the phenomenon which is expressed by bad thinkers in both the carbon and the silicon realms, and that is what we are countering.

> predict[ing] "facts"

It's called "Science". In a broader way, it's called "intelligence" ("Intelligence is being able to predict the outcomes of an experience you never had" ~~ Prof. Patrick Winston)

> Getting facts right is a matter of

It is a matter of procedurally adhering to an attitude of iterative quality refinement of ideas, and LLMs seem to be dramatically bad at "procedures".

nurettin

> Wolfram has been saying this from the beginning, I think.

Wolfram has been distinguishing between probabilistic output and deterministic output from a neural network since the beginning? Trying to monopolize such basic concepts doesn't make much sense. It's like saying he has been thinking of sporks since the beginning.

oedemis

Hello, I tried to explain Large Language Models with some visualizations, especially the attention mechanism.
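
For instance, at its core the attention mechanism is only a few lines. A minimal scaled dot-product attention sketch (random matrices stand in for the learned projections, and the shapes are made up for illustration):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # scores[i, j]: how much token i attends to token j
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
        return weights @ V                                # weighted mix of values

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))          # 4 tokens, 8-dimensional embeddings
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
    print(out.shape)                     # (4, 8): one updated vector per token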

itronitron

You should probably mention that embeddings are just a renaming of text vectors, a.k.a. the vector space model, which has probably been in use since before neural networks.
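
Whichever name you use, relatedness is just nearness in the vector space. A toy illustration, with made-up 3-dimensional vectors standing in for real embeddings:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Made-up 3-dimensional "embeddings"; real ones have hundreds of dimensions.
    vectors = {
        "cat": np.array([0.9, 0.1, 0.0]),
        "dog": np.array([0.8, 0.2, 0.1]),
        "car": np.array([0.1, 0.9, 0.3]),
    }
    print(cosine(vectors["cat"], vectors["dog"]))   # high: related words
    print(cosine(vectors["cat"], vectors["car"]))   # lower: unrelated words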

DebtDeflation

Would love to see a similar explanation of how "reasoning" versions of LLMs are trained. I understand that OpenAI was mum about how they specifically trained o1/o3 and that people are having to reverse-engineer from the DeepSeek paper, which may or may not describe a different approach, but I would like to see a coherent explanation that is not just a regurgitation of Chain of Thought or handwavy "special reasoning tokens give the model more time to think".

Philpax

This may be useful: https://www.interconnects.ai/p/deepseek-r1-recipe-for-o1

but the tl;dr of the idea is that we can use reinforcement learning on a strong base model (i.e. one that hasn't been fine-tuned) to elicit the generation of tokens that help the model reach a result that can be verified to be correct. That is, if we have a way of verifying that a specific output is correct, the model can be trained to consistently produce tokens that lead to that result for a given input, and this facility generalises the more problems you train it on.

There are some more nuances (the Interconnects article goes into that), but that's the fundamental idea of Reinforcement Learning from Verifiable Rewards.
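
A toy version of that loop, with everything invented for illustration: the "model" is just a table of logits over a few canned solution strings and the verifier checks an arithmetic answer, but the shape is the same (sample, verify, reinforce whatever led to a verified result).

    import math, random

    # Candidate "chains of thought" the toy model can emit for the prompt "17 * 3 = ?"
    candidates = ["17*3 = 17+17+17 = 51", "17*3 = 54", "17*3 = 41"]
    logits = [0.0, 0.0, 0.0]                  # the model's only parameters

    def verify(output):
        return output.strip().endswith("51")  # verifiable reward: correct final answer

    def sample():
        weights = [math.exp(l) for l in logits]
        return random.choices(range(len(candidates)), weights=weights)[0]

    for step in range(200):
        i = sample()
        reward = 1.0 if verify(candidates[i]) else -0.1
        logits[i] += 0.1 * reward             # crude policy-gradient-style update

    print(max(zip(logits, candidates)))       # the verified solution ends up most likely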

UltraSane

This paper [1] even claims that "models primed with incorrect solutions containing proper reasoning patterns achieve comparable performance to those trained on correct solutions."

[1] https://arxiv.org/abs/2503.01307

rco8786

I'm not sure I would call this "simple", but I appreciated the walkthrough. I understood a lot of it at a high level before reading, and this helped solidify my understanding a bit more. Though it also serves to highlight just how complex LLMs actually are.

noodletheworld

While I appreciate the pictures, really, at the end of the day, all you have is a glossary and slightly more detailed arbitrary hand-waving.

What specific architecture is used to build a basic model?

Why is that specific combination of basic building blocks used?

Why does it work when other similar ones don’t?

I generally approve of simplifications, but these LLM simplifications are too vague and broad to be useful or meaningful.

Here's my challenge: take that article and write an LLM.

No?

How about an article on raytracing?

Anyone can do a raytracer in a weekend.

Why does building an LLM come with miles of conceptual explanation and nothing concrete you can actually build?

Where’s my “LLM in a weekend” that covers the theory and how to actually implement one?

The distinction between this and something like https://github.com/rasbt/LLMs-from-scratch is stark.

My hot take is: if you haven't built one, you don't actually understand how they work; you just have a vague, kind-of-heard-of-it understanding, which is not the same thing.

…maybe that’s harsh, and unfair. I’ll take it, maybe it is; but I’ve seen a lot of LLM explanations that conveniently stop before they get to the hard part of “and how do you actually do it?”, and another one? Eh.
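
For what it's worth, the bare core of a language model really is weekend-sized. A character-level bigram model trained by counting -- no neural network and no attention, but the same next-token-prediction loop (toy corpus invented for illustration):

    import random
    from collections import defaultdict

    text = "to be or not to be that is the question "
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):          # count how often character b follows a
        counts[a][b] += 1

    def generate(start="t", length=40):
        out = start
        for _ in range(length):
            nxt = counts[out[-1]]
            if not nxt:
                break
            chars, freqs = zip(*nxt.items())
            out += random.choices(chars, weights=freqs)[0]   # sample next character
        return out

    print(generate())   # babble, but produced by the same predict-the-next-token loop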

hegx

Warning: these "fundamentals" will become obsolete faster than you can wrap your head around them.

raincole

They really don't, though. The Transformer was introduced in 2017.

amelius

It would be nice if there were a place (e.g. a GitHub repo) that tracked the best resources for learning this stuff.

betto

Why don't you come on my podcast to explain LLMs? I would love it.

https://www.youtube.com/@CouchX-SoftwareTechexplain-k9v