
Deep Learning Is Applied Topology

199 comments

·May 20, 2025

colah3

Since this post is based on my 2014 blog post (https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ ), I thought I might comment.

I tried really hard to use topology as a way to understand neural networks, for example in these follow ups:

- https://colah.github.io/posts/2014-10-Visualizing-MNIST/

- https://colah.github.io/posts/2015-01-Visualizing-Representa...

There are places I've found the topological perspective useful, but after a decade of grappling with what goes on inside neural networks, I just haven't gotten that much traction out of it.

I've had a lot more success with:

* The linear representation hypothesis - The idea that "concepts" (features) correspond to directions in neural networks (see the toy sketch after this list).

* The idea of circuits - networks of such connected concepts.
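
To make the linear representation idea above concrete, here is a toy numpy sketch of reading off and steering a "concept direction"; the dimensions, directions, and activation recipe are invented purely for illustration and are not taken from any real model.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 64  # hypothetical residual-stream width

    def unit(v):
        return v / np.linalg.norm(v)

    # Pretend these are feature directions recovered by some interpretability
    # method (e.g. a sparse autoencoder); here they are just random unit vectors.
    gender_dir = unit(rng.normal(size=d))
    royalty_dir = unit(rng.normal(size=d))

    # Under the linear representation hypothesis, an activation vector is
    # (approximately) a sparse weighted sum of such directions.
    def make_activation(gender, royalty, noise=0.01):
        return gender * gender_dir + royalty * royalty_dir + noise * rng.normal(size=d)

    queen = make_activation(gender=+1.0, royalty=+1.0)
    king = make_activation(gender=-1.0, royalty=+1.0)

    # "Reading off" a concept is then just a projection onto its direction...
    print(queen @ gender_dir, king @ gender_dir)   # roughly +1 vs roughly -1

    # ...and "steering" is adding or subtracting the direction.
    steered = king + 2.0 * gender_dir
    print(steered @ gender_dir)                    # roughly +1 after steering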

Some selected related writing:

- https://distill.pub/2020/circuits/zoom-in/

- https://transformer-circuits.pub/2022/mech-interp-essay/inde...

- https://transformer-circuits.pub/2025/attribution-graphs/bio...

montebicyclelo

Related to ways of understanding neural networks, I've seen these views expressed a lot, which to me seem like misconceptions:

- LLMs are basically just slightly better `n-gram` models

- The idea of "just" predicting the next token, as if next-token-prediction implies a model must be dumb

(I wonder if this [1] popular response to Karpathy's RNN [2] post is partly to blame for people equating language neural nets with n-gram models. The stochastic parrot paper [3] also somewhat equates LLMs and n-gram models, e.g. "although she primarily had n-gram models in mind, the conclusions remain apt and relevant". I guess there was a time when they were more equivalent, before the nets got really, really good)

[1] https://nbviewer.org/gist/yoavg/d76121dfde2618422139

[2] https://karpathy.github.io/2015/05/21/rnn-effectiveness/

[3] https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
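
For readers who have never touched one, this is roughly all an n-gram model is; the toy corpus below is made up, and the sketch is a bigram (2-gram) counter rather than anything tuned:

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count next-token frequencies conditioned on the previous token only.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(prev):
        # The whole "model" is a lookup table of conditional frequencies;
        # nothing generalizes across contexts or to unseen contexts.
        total = sum(counts[prev].values())
        return {w: c / total for w, c in counts[prev].items()}

    print(predict_next("the"))   # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
    print(predict_next("sat"))   # {'on': 1.0}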

colah3

I guess I'll plug my hobby horse:

The whole discourse of "stochastic parrots" and "do models understand" and so on is deeply unhealthy because these should be scientific questions about mechanism, and people don't have a vocabulary for discussing the range of mechanisms which might exist inside a neural network. So instead we have lots of arguments where people project meaning onto very fuzzy ideas, and the arguments don't ground out to scientific, empirical claims.

Our recent paper reverse engineers the computation neural networks use to answer in a number of interesting cases (https://transformer-circuits.pub/2025/attribution-graphs/bio... ). We find computation that one might informally describe as "multi-step inference", "planning", and so on. I think it's maybe clarifying for this, because it grounds out to very specific empirical claims about mechanism (which we test by intervention experiments).

Of course, one can disagree with the informal language we use. I'm happy for people to use whatever language they want! I think in an ideal world, we'd move more towards talking about concrete mechanism, and we need to develop ways to talk about these informally.

There was previous discussion of our paper here: https://news.ycombinator.com/item?id=43505748

HarHarVeryFunny

1) Isn't it unavoidable that a transformer - a sequential multi-layer architecture - is doing multi-step inference ?!

2) There are two aspects to a rhyming poem:

a) It is a poem, so must have a fairly high degree of thematic coherence

b) It rhymes, so must have end-of-line rhyming words

It seems that to learn to predict (hence generate) a rhyming poem, both of these requirements (theme/story continuation+rhyming) would need to be predicted ("planned") at least by the beginning of the line, since they are inter-related.

In contrast, a genre like freestyle rap may also rhyme, but flow is what matters and thematic coherence and rhyming may suffer as a result. In learning to predict (hence generate) freestyle, an LLM might therefore be expected to learn that genre-specific improv is what to expect, and that rhyming is of secondary importance, so one might expect less rhyme-based prediction ("planning") at the start of each bar (line).

lo_zamoyski

> The whole discourse of "stochastic parrots" and "do models understand" and so on is deeply unhealthy [...] So instead we have lots of arguments where people project meaning onto very fuzzy ideas and the argument doesn't ground out to scientific, empirical claims.

I would put it this way: the question "do LLMs, etc understand?" is rooted in a category mistake.

Meaning, I am not claiming that it is premature to answer such questions because we lack a sufficient grasp of neural networks. I am asserting that LLMs don't understand, because the question of whether they do is like asking whether A-flat is yellow.

somewhereoutth

Regardless of the mechanism, the foundational 'conceit' of LLMs is that by dumping enough syntax (and only syntax) into a sufficiently complex system, the semantics can be induced to emerge.

Quite a stretch, in my opinion (cf. Plato's Cave).

mdp2021

Absolutely, the first task should be to understand how and why black boxes with emergent properties actually work, in order to further knowledge - but importantly, in order to improve them and build on the acquired knowledge to surpass them. That implies curbing «parrot[ing]» and inadequate «understand[ing]».

I.e. those higher concepts are kept in mind as a goal. It is healthy: it keeps the aim alive.

visarga

My favorite argument against SP is zero shot translation. The model learns Japanese-English and Swahili-English and then can translate Japanese-Swahili directly. That shows something more than simple pattern matching happens inside.

Besides all arguments based on model capabilities, there is also an argument from usage - LLMs are more like pianos than parrots. People are playing the LLM on the keyboard, making them 'sing'. Pianos don't make music, but musicians with pianos do. Bender and Gebru talk about LLMs as if they work alone, with no human direction. Pianos are also dumb on their own.

agentcoops

1000%. It's really hard to express this to non-engineers who never wasted years of their life trying to work with n-grams and NLTK (even topic models) to make sense of textual data... Projects I dreamed of circa 2012 are now completely trivial. If you do have that comparison ready-at-hand, the problem of understanding what this mind-blowing leap means, to which end I find writing like the OP helpful, is so fascinating and something completely different than complaining that it's a "black box."

I've expressed this on here before, but it feels like the everyday reception of LLMs has been so damaged by the general public having just gotten a basic grasp on the existence of machine learning.

theahura

Thanks for the follow up. I've been following your circuits thread for several years now. I find the linear representation hypothesis very compelling, and I have a draft of a review for Toy Models of Superposition sitting in my notes. Circuits I find less compelling, since the analysis there feels very tied to the transformer architecture in specific, but what do I know.

Re linear representation hypothesis, surely it depends on the architecture? GANs, VAEs, CLIP, etc. seem to explicitly model manifolds. And even simple models will, due to optimization pressure, collapse similar-enough features into the same linear direction. I suppose it's hard to reconcile the manifold hypothesis with the empirical evidence that simple models will place similar-ish features in orthogonal directions, but surely that has more to do with the loss that is being optimized? In Toy Models of Superposition, you're using an MSE loss which effectively makes the model learn an autoencoder regression / compression task. Makes sense then that the interference patterns between co-occurring features would matter. But in a different setting, say a contrastive loss objective, I suspect you wouldn't see that same interference minimization behavior.
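
For context, here is a compressed sketch of the kind of Toy Models-style setup being referenced (tiny ReLU autoencoder, sparse synthetic features, MSE objective); the sizes, sparsity, and training schedule below are arbitrary choices, not the paper's:

    import torch

    torch.manual_seed(0)
    n_features, d_hidden, sparsity = 20, 5, 0.95

    W = torch.nn.Parameter(torch.randn(n_features, d_hidden) * 0.1)
    b = torch.nn.Parameter(torch.zeros(n_features))
    opt = torch.optim.Adam([W, b], lr=1e-3)

    for step in range(5000):
        # Sparse synthetic features: each is active with probability 1 - sparsity,
        # with a uniform random magnitude when active.
        active = (torch.rand(256, n_features) > sparsity).float()
        x = torch.rand(256, n_features) * active
        x_hat = torch.relu(x @ W @ W.T + b)   # project down to d_hidden dims, then back up
        loss = ((x - x_hat) ** 2).mean()      # the MSE objective discussed above
        opt.zero_grad()
        loss.backward()
        opt.step()

    # With enough sparsity, more features than d_hidden get represented: the rows
    # of W (one direction per feature) interfere with each other but stay decodable.
    print(torch.round((W @ W.T).detach(), decimals=2))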

colah3

> Circuits I find less compelling, since the analysis there feels very tied to the transformer architecture in specific, but what do I know.

I don't think circuits is specific to transformers? Our work in the Transformer Circuits thread often is, but the original circuits work was done on convolutional vision models (https://distill.pub/2020/circuits/ )

> Re linear representation hypothesis, surely it depends on the architecture? GANs, VAEs, CLIP, etc. seem to explicitly model manifolds

(1) There are actually quite a few examples of seemingly linear representations in GANs, VAEs, etc (see discussion in Toy Models for examples).

(2) Linear representations aren't necessarily in tension with the manifold hypothesis.

(3) GANs/VAEs/etc modeling things as a latent gaussian space is actually way more natural if you allow superposition (which requires linear representations) since central limit theorem allows superposition to produce Gaussian-like distributions.
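
A quick numerical illustration of point (3), under the assumption that "superposition" here just means many sparse features assigned to random directions in a lower-dimensional space:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_features = 32, 5000

    # Many more features than dimensions, each assigned a random unit direction
    # (superposition), each active only sparsely with a random magnitude.
    directions = rng.normal(size=(n_features, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    def sample_activation():
        active = rng.random(n_features) < 0.02          # sparse activity
        magnitudes = rng.random(n_features) * active
        return magnitudes @ directions                  # sum of many small terms

    samples = np.array([sample_activation() for _ in range(2000)])
    coord = samples[:, 0]
    # By the central limit theorem, the per-coordinate marginals look Gaussian.
    print(coord.mean(), coord.std())
    print(np.mean(np.abs(coord - coord.mean()) < coord.std()))  # ~0.68 if Gaussian-ish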

theahura

> the original circuits work was done on convolutional vision models

O neat, I haven't read that far back. Will add it to the reading list.

To flesh this out a bit, part of why I find circuits less compelling is because it seems intuitive to me that neural networks more or less smoothly blend 'process' and 'state'. As an intuition pump, a vector x matrix matmul in an MLP can be viewed as changing the basis of an input vector (ie the weights act as a process) or as a way to select specific pieces of information from a set of embedding rows (ie the weights act as state).
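
A tiny illustration of that dual reading of the same matmul (all numbers invented):

    import numpy as np

    W = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])   # 3 "embedding rows", each a 2-d vector

    # Reading 1: W as process. A dense input is re-expressed in a new basis;
    # the output mixes all rows according to the input coordinates.
    x_dense = np.array([0.2, 0.5, 0.3])
    print(x_dense @ W)            # change of basis / mixing

    # Reading 2: W as state. A one-hot input just retrieves a stored row,
    # so the weights behave like a lookup table of embeddings.
    x_onehot = np.array([0.0, 1.0, 0.0])
    print(x_onehot @ W)           # [3., 4.] -- row 1, read out verbatim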

There are architectures that try to separate these out with varying degrees of success -- LSTMs and ResNets seem to have a more clear throughline of 'state' with various 'operations' that are applied to that state in sequence. But that seems really architecture-dependent.

I will openly admit though that I am very willing to be convinced by the circuits paradigm. I have a background in molecular bio and there's something very 'protein pathways' about it.

> Linear representations aren't necessarily in tension with the manifold hypothesis.

True! I suppose I was thinking about a 'strong' form of linear representations, which is something like: features are represented by linear combinations of neurons that display the same repulsion-geometries as observed in Toy Models, but that's not what you're saying / that's me jumping a step too far.

> GANs/VAEs/etc modeling things as a latent gaussian space is actually way more natural if you allow superposition

Superposition is one of those things that has always been so intuitive to me that I can't imagine it not being a part of neural network learning.

But I want to make sure I'm getting my terminology right -- why does superposition necessarily require the linear representation hypothesis? Or, to be more specific, does [individual neurons being used in combination with other neurons to represent more features than neurons] necessarily require [features are linear compositions of neurons]?

rajnathani

I was going to comment the same thing about the superposition hypothesis [0] when the OP comment mentioned "I've had a lot more success with: * The linear representation hypothesis - The idea that "concepts" (features) correspond to directions in neural networks" (edit: as pointed out by other HN comments, the OP commenter is the Anthropic cofounder behind the superposition research), because one concept per NN feature seems too "basic" to explain some of the learning which NNs can do on datasets. On one of our custom-trained neural network models (not an LLM, but audio-based and currently proprietary) we noticed the same thing: the model was able to "overfit" on a large amount of data despite having relatively few parameters compared to the size of the dataset (and that too with dropout in the early layers).

[0] https://www.anthropic.com/research/superposition-memorizatio...

j2kun

This has mirrored my experience attempting to "apply" topology in real world circumstances, off and on since I first studied topology in 2011.

I even hesitate now at the common refrain "real world data approximates a smooth, low dimensional manifold." I want to spend some time really investigating to what extent this claim actually holds for real world data, and to what extent it is distorted by the dimensionality reduction method we apply to natural data sets in order to promote efficiency. But alas, who has the time?

riemannzeta

I think it's interesting that in physics, different global symmetries (topological manifolds) can satisfy the same metric structure (local geometry). For example, the same metric tensor solution to Einstein's field equation can exist on topologically distinct manifolds. Conversely, looking at solutions to the Ising Model, we can say that the same lattice topology can have many different solutions, and when the system is near a critical point, the lattice topology doesn't even matter.

It's only an analogy, but it does suggest at least that the interesting details of the dynamics aren't embedded in the topology of the system. It's more complicated than that.

colah3

If you like symmetry, you might enjoy how symmetry falls out of circuit analysis of conv nets here:

https://distill.pub/2020/circuits/equivariance/

riemannzeta

Thanks for this additional link, which really underscores for me at least how you're right about patterns in circuits being a better abstraction layer for capturing interesting patterns than topological manifolds.

I wasn't familiar with the term "equivariance" but I "woke up" to this sort of approach to understanding deep neural networks when I read this paper, which shows how restricted Boltzmann machines have an exact mapping to the renormalization group approach used to study phase transitions in condensed matter and high energy physics:

https://arxiv.org/abs/1410.3831

At high enough energy, everything is symmetric. As energy begins to drain from the system, eventually every symmetry is broken. All fine structure emerges from the breaking of some symmetries.

I'd love to get more in the weeds on this work. I'm in my own local equilibrium of sorts doing much more mundane stuff.

dang

That earlier post had a few small HN discussions (for those interested):

Neural Networks, Manifolds, and Topology (2014) - https://news.ycombinator.com/item?id=19132702 - Feb 2019 (25 comments)

Neural Networks, Manifolds, and Topology (2014) - https://news.ycombinator.com/item?id=9814114 - July 2015 (7 comments)

Neural Networks, Manifolds, and Topology - https://news.ycombinator.com/item?id=7557964 - April 2014 (29 comments)

godelski

Loved these posts and they inspired a lot of my research and directions during my PhDs.

For anyone interested in these, may I also suggest learning about normalizing flows? (They are the broader class that flow matching belongs to.) They are learnable networks that learn coordinate changes. So the connection to geometry/topology is much more obvious. Of course the downside of flows is you're stuck with a constant dimension (well... sorta), but I still think they can help you understand a lot more of what's going on, because you are working in a more interpretable environment.
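
As a minimal sketch of the "learnable coordinate change" idea, here is a single affine flow layer in one dimension with the change-of-variables bookkeeping; real normalizing flows stack many invertible, parameterized layers, but the log-det-Jacobian term works the same way. The values of a and b below are arbitrary.

    import numpy as np

    # One affine "flow" layer: an invertible change of coordinates z = a*x + b.
    a, b = 2.0, -1.0

    def forward(x):            # data space -> latent space
        return a * x + b

    def inverse(z):            # latent space -> data space
        return (z - b) / a

    def log_prob(x):
        # Change of variables: log p(x) = log N(forward(x); 0, 1) + log |d forward / dx|
        z = forward(x)
        log_base = -0.5 * (z ** 2) - 0.5 * np.log(2 * np.pi)
        log_det_jacobian = np.log(abs(a))
        return log_base + log_det_jacobian

    x = np.array([0.0, 0.5, 1.0])
    print(log_prob(x))
    # Training a real flow means making a, b (or their neural-net generalizations)
    # parameters and maximizing log_prob on data; sampling is inverse(standard normal).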

winwang

hey chris, I found your posts quite inspiring back then, with very poetic ideas. cool to see you follow up here!

adamnemecek

Consider looking into fields related to machine learning to see how topology is used there. The main problem is that some of the cool math did not survive the transition to CS, e.g. the math for control theory is not quite present in RL.

In terms of topology, control theory has some very cool topological interpretations, e.g. toruses appear quite a bit in control theory.

esafak

If it were topology we wouldn't bother to warp the manifold so we can do similarity search. No, it's geometry, with a metric. Just as in real life, we want to be able to compare things.

Topological transformation of the manifold happens during training too. That makes me wonder: how does the topology evolve during training? I imagine it violently changing at first before stabilizing, followed by geometric refinement. Here are some relevant papers:

* Topology and geometry of data manifold in deep learning (https://arxiv.org/abs/2204.08624)

* Topology of Deep Neural Networks (https://jmlr.org/papers/v21/20-345.html)

* Persistent Topological Features in Large Language Models (https://arxiv.org/abs/2410.11042)

* Deep learning as Ricci flow (https://www.nature.com/articles/s41598-024-74045-9)

theahura

> Topological transformation of the manifold happens during training too. That makes me wonder: how does the topology evolve during training?

If you've ever played with GANs or VAEs, you can actually answer this question! And the answer is more or less 'yes'. You can look at GANs at various checkpoints during training and see how different points in the high dimensional space move around (using tools like UMAP / TSNE).

> I imagine it violently changing at first before stabilizing, followed by geometric refinement

Also correct, though the violent changing at the beginning is also influenced by the learning rate and the choice of optimizer.
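
A rough sketch of that kind of checkpoint comparison; the checkpoint embeddings below are simulated stand-ins, and the fixed "probe set" is just one assumed way to set this up:

    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)
    n_probes, d = 500, 64

    # Stand-in for embeddings of the same fixed probe inputs saved at several
    # training checkpoints (here simulated: early checkpoints are just noisier).
    checkpoints = {
        step: rng.normal(size=(n_probes, d)) * (1.0 / (1 + i))
        for i, step in enumerate([0, 1000, 10000, 100000])
    }

    for step, emb in checkpoints.items():
        # Project each checkpoint's probe embeddings to 2-D; comparing the
        # layouts across checkpoints shows how the learned geometry moves.
        proj = TSNE(n_components=2, init="pca", random_state=0).fit_transform(emb)
        print(step, proj.shape, proj.std(axis=0))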

esafak

And crucially, the initialization algorithm.

profchemai

Agree, if anything it's Applied Linear Algebra...but that sounds less exotic.

lostmsu

Well, we know it is non-linear. More like differential equations.

ComplexSystems

I really liked this article, though I don't know why the author is calling the idea of finding a separating surface between two classes of points "topology." For instance, they write

"If you are trying to learn a translation task — say, English to Spanish, or Images to Text — your model will learn a topology where bread is close to pan, or where that picture of a cat is close to the word cat."

This is everything that topology is not about: a notion of points being "close" or "far." If we have some topological space in which two points are "close," we can stretch the space so as to get the same topological space, but with the two points now "far". That's the whole point of the joke that the coffee cup and the donut are the same thing.

Instead, the entire thing seems to be a real-world application of something like algebraic geometry. We want to look for something like an algebraic variety the points are near. It's all about geometry and all about metrics between points. That's what it seems like to me, anyway.

srean

> This is everything that topology is not about

100 percent true.

I can only hope that in an article that is about two things, i) topology and ii) deep learning, the evident confusions are contained within one of them -- topology, only.

theahura

fair, I was using 'topology' more colloquially in that sentence. Should have said 'surface'.

srean

Ah! That clears it up.

You then mean Deep Learning has a lot in common with differential geometry and manifolds in general. That I will definitely agree with. DG and manifolds have far richer and more informative structure than topology.

steppi

If I had to give a loose definition of topology, I would say that it is actually about studying spaces which have some notion of what is close and far, even if no metric exists. The core idea of neighborhoods in point set topology captures the idea of points being nearby another point, and allows defining things like continuity and sequence convergence which require a notion of closeness. From Wikipedia [0] for example

The terms 'nearby', 'arbitrarily small', and 'far apart' can all be made precise by using the concept of open sets. If we change the definition of 'open set', we change what continuous functions, compact sets, and connected sets are. Each choice of definition for 'open set' is called a topology. A set with a topology is called a topological space.

Metric spaces are an important class of topological spaces where a real, non-negative distance, also called a metric, can be defined on pairs of points in the set. Having a metric simplifies many proofs, and many of the most common topological spaces are metric spaces.
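
For reference, the definition being paraphrased, in the usual notation:

    A topology on a set $X$ is a collection $\tau \subseteq \mathcal{P}(X)$ of
    "open sets" satisfying
    \[
      \emptyset \in \tau,\qquad X \in \tau,\qquad
      U, V \in \tau \;\Rightarrow\; U \cap V \in \tau,\qquad
      \{U_i\}_{i \in I} \subseteq \tau \;\Rightarrow\; \bigcup_{i \in I} U_i \in \tau,
    \]
    and a map $f : X \to Y$ between topological spaces is continuous exactly when
    $f^{-1}(V)$ is open in $X$ for every open $V \subseteq Y$. No metric is needed anywhere.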

That's not to say that topology is necessarily the best lens for understanding neural networks, and the article's author has shown up in the comments to state he's moved on in his thinking. I'm just trying to clear up a misconception.

[0] https://en.wikipedia.org/wiki/General_topology

srean

The title, as it stands, is trite and wrong. More about that a little later. The article on the other hand is a pleasant read.

Topology is whatever little structure remains in geometry after you throw away distances, angles, and orientations and allow all sorts of non-tearing stretchings. It's the bare minimum that still remains valid after such violent deformations.

While the notion of topology is definitely useful in machine learning, scale, distance, angles, etc. usually provide lots of essential information about the data.

If you want to distinguish between a tabby cat and a tiger it would be an act of stupidity to ignore scale.

Topology is especially useful when you cannot trust lengths, distances, or angles under arbitrary deformations. That happens, but to claim deep learning is applied topology is absurd, almost stupid.

theahura

> Topology is useful especially when you cannot trust lengths, distances angles and arbitrary deformations

But...you can't. The input data lives on a manifold that you cannot 'trust'. It doesn't mean anything a priori that an image of a Coca-Cola can and an image of a stop sign live close to each other in pixel space. The neural network applies all of those violent transformations you are talking about.

srean

> But...you can't.

Only in a desperate sales pitch or a desperate research grant. There are of course some situations where certain measurements are untrustworthy, but to claim that is the common case is very snake oily.

When certain measurements become untrustworthy, it is not very often only because of some unknown smooth transformation (which is what purely topological methods deal with). Random noise will also do that for you.

Not disputing the fact that sometimes metrics cannot be trusted entirely, but to go to a topological approach seems extreme. One should use as much of the relevant non-topological information as possible.

As the hackneyed example goes, a topological method would not be able to distinguish between a cup and a donut. For that you would need to trust non-topological features such as distances and angles. Deep learning methods can indeed differentiate between donuts and coffee mugs.

BTW I am completely on-board with the idea that data often looks as if it has been sampled from an unknown, potentially smooth, possibly non-Euclidean manifold and then corrupted by noise. In such cases recovering that manifold from noisy data is a very worthy cause.

In fact that is what most of your blogpost is about. But that's differential geometry and manifolds, which have structure far richer than a topology. For example they may have tangent planes, a Riemannian metric, or a symplectic form, etc. A topological method would throw all of that away and focus on topology.

kentuckyrobby

I don't think that was their point. I think their point was that neural networks 'create' their optimization space by using lengths, distances, and angles. You can't reframe it from a topological standpoint, otherwise optimization spaces of some similar neural networks on similar problems would be topologically comparable, which is not true.

theahura

Well, sorta. There is some evidence to suggest that neural networks learn 'universal' features (cf Anthropic's circuits thread). But I'll openly admit to being out of my depth here, and maybe I just don't understand OPs point

throwawaymaths

once you get into the nitty gritty, a lot of things that wouldn't matter if it were pure topology do matter, from the number of layers all the way down to quantization/FP resolution

quantadev

The word "topology" has a legitimate dictionary definition, that has none of the requirements that you're asserting. I think what you're missing is that it has two definitions.

srean

In blog posts about specialised and technical topics it is expected that in-domain technical keywords that have long established definitions and meanings be used in the same technical sense. Otherwise it can become quite confusing. Gravity means gravity when we are talking Newtonian mechanics. Similarly, in math and ML 'topology' has a specific meaning.

quantadev

The word "topology" is quite commonly used in all kinds of books, papers, and technical materials any time they're discussing geometric characteristics of surfaces. The term is probably used 1000000 times more commonly in this more generic way than it's ever used in the strict pedantic way you're asserting that it must.

cvoss

The phrase "applied X" invokes the technical, scientific, or academic meaning of X. So for example, "applied chemistry" does not refer to one's experience on a dating app.

quantadev

The word "topology" is _much_ more commonly used as a general synonym for "surfaces" than in any other way.

soulofmischief

Thanks for sharing. I also tend to view learning in terms of manifolds. It's a powerful representation.

> I'm personally pretty convinced that, in a high enough dimensional space, this is indistinguishable from reasoning

I actually have journaled extensively about this and even written some on Hacker News about it with respect to what I've been calling probabilistic reasoning manifolds:

> This manifold is constructed via learning a decontextualized pattern space on a given set of inputs. Given the inherent probabilistic nature of sampling, true reasoning is expressed in terms of probabilities, not axioms. It may be possible to discover axioms by locating fixed points or attractors on the manifold, but ultimately you're looking at a probabilistic manifold constructed from your input set.

> But I don't think you can untie this "reasoning" from your input data. It's possible you will find "meta-reasoning", or similar structures found in any sufficiently advanced reasoning manifold, but these highly decontextualized structures might be entirely useless without proper recontextualization, necessitating that a reasoning manifold is trained on input whose patterns follow learnable underlying rules, if the manifold is to be useful for processing input of that kind.

> Decontextualization is learning, decomposing aspects of an input into context-agnostic relationships. But recontextualization is the other half of that, knowing how to take highly abstract, sometimes inexpressible, context-agnostic relationships and transform them into useful analysis in novel domains

Full comment: https://news.ycombinator.com/item?id=42871894

mjburgess

Are you talking about reasoning in general, reasoning qua that mental process which operates on (representations of) propositions?

In which case, I cannot understand " true reasoning is expressed in terms of probabilities, not axioms "

One of the features of reasoning is that it does not operate in this way. It's highly implausible animals would have been endowed with no ability to operate non-probabilistically on propositions represented by them, since this is essential for correct reasoning -- and a relatively trivial capability to provide.

Eg., "if the spider is in boxA, then it is not everywhere else" and so on

soulofmischief

Propositions are just predictions, they all come with some level of uncertainty even if we ignore that uncertainty for practical purposes.

Any validation of a theory is inherently statistical, as you must sample your environment with some level of precision across spacetime, and that level of precision correlates to the known accuracy of hypotheses. In other words, we can create axiomatic systems of logic, but ultimately any attempt to compare them to reality involves empirical sampling.

Unlike classical physics, our current understanding of quantum physics essentially allows for anything to be "possible" at large enough spacetime scales, even if it is never actually "probable". For example, quantum tunneling, where a quantum system might suddenly overcome an energy barrier despite lacking the required energy.

Every day when I walk outside my door and step onto the ground, I am operating on a belief that gravity will work the same way every time, that I won't suddenly pass through the Earth's crust or float into the sky. We often take such things for granted, as axiomatic, but ultimately all of our reasoning is based on statistical correlations. There is the ever-minute possibility that gravity suddenly stops working as expected.

> if the spider is in boxA, then it is not everywhere else

We can't even physically prove that. There's always some level of uncertainty which introduces probability into your reasoning. It's just convenient for us to say, "it's exceedingly unlikely in the entire age of the universe that a macroscopic spider will tunnel from Box A to Box B", and apply non-probabilistic heuristics.

It doesn't remove the probability, we just don't bother to consider it when making decisions because the energy required for accounting for such improbabilities outweighs the energy saved by not accounting for them.

As mentioned in my comment, there's also the possibility that universal axioms may be recoverable as fixed points in a reasoning manifold, or in some other transformation. If you view these probabilities as attractors on some surface, fixed points may represent "axioms" that are true or false under any contextual transformation.

mjburgess

This response doesn't fill me with confidence. You aren't really engaging with any of the actual issues your position entails.

A proposition is not a prediction. A prediction is either an estimate of the value of some quantity ("the dumb ML meaning of prediction") or a proposition which describes a future scenario. We can trivially enumerate propositions that do not describe future scenarios, eg., 2 + 2 = 4.

Uncertainty is a property of belief attitudes towards propositions, it isn't a feature of their semantic content. A person doesn't mean anything different by "2 + 2 = 4" if they are 80 or 90% sure of it.

> We can't even physically prove that.

Irrelevant. Our minds are not constrained by physical possibility, necessarily so, as we know very little about what is physically possible. I can imagine an arbitrary number of cases, arising out of logical manipulation of propositions, that are not physically possible. (E.g., "Superman can lift any building. The Empire State Building is so-and-so a kind of building. Imagine(Superman lifting the Empire State Building)").

The infinite variety of our imagination is a trivial consequence of non-probabilistic operations on propositions, it's incomprehensibly implausible as a consequence of merely probabilistic ones.

That nature seems to have endowed minds with discrete operations, that these are empirical in operation across very wide classes of reasoning, including imagination, that these seem trivial for nature to provide (etc.) render the notion that they don't exist highly highly implausible.

There is nothing lacking explanation here. The relevant mental processes we have to hand are fairly obvious and fairly easy to explain.

It's an obvious act of credulity to try and find some way to make the latest trinkets of the recent rich some sort of miracle. All of these projects of "incredible abstraction" follow around these hype cycles, turning lead into gold: if x "is really" y, and y "is really" z, and ..., then x is amazing! This piles towers of ever more general hollowed-out words on top of each other until the most trivial thing sounds like a wonder.

naasking

> It's highly implausible animals would have been endowed with no ability to operate non-probabilistically on propositions represented by them, since this is essential for correct reasoning

Why would animals need to evolve 100% correct reasoning if probabilistically correct reasoning suffices? If probabilistic reasoning is cheaper in terms of energy then correct reasoning is a disadvantage.

mjburgess

It doesn't suffice. It's also vastly energetically cheaper just to have (algorithmic) negation. Compressing (A, not A) into a probability function is extremely, incomprehensibly expensive.

jvanderbot

I suspect, as a layperson who watches people make decisions all the time, that somewhere in our mind is a "certainty checker".

We don't do logic itself, we just create logic from certainty as part of verbal reasoning. It's our messy internal inference of likelihoods that causes us to pause and think, or dash forward with confidence, and convincing others is the only place we need things like "theorems".

This is the only way I can square things like intuition, writing to formalize thoughts, verbal argument, etc, with the fact that people are just so mushy all the time.

mjburgess

People are only mushy in their verbalised reasoning, because it's the nature of such reasoning to handle hard cases. Animal cognition, at its basic levels, is incredibly refined and makes necessary use of logic, flawlessly, frequently.

This naive cynicism about our mental capacities is a product of this credulity about statistical AI. If one begins with an earnest study of animal intelligence, in order to describe it, it disappears. It's exactly and only a project of the child playing with his lego, certain that great engineering projects have little use for any more than stacking bricks.

umutisik

Data doesn't actually live on a manifold. It's an approximation used for thinking about data. The near-total majority, if not 100%, of the useful things done in deep learning have come from not thinking about topology in any way. Deep learning is not applied anything; it's an empirical field advanced mostly by trial and error and, sure, a few intuitions coming from theory (which was not topology).

sota_pop

I disagree with this wholeheartedly. Sure, there is lots of trial and error, but it's more an amalgamation of theory from many areas of mathematics including but not limited to: topology, geometry, game theory, calculus, and statistics. The very foundation (i.e. back-propagation) is just the chain rule applied to the weights. The difference is that deep learning has become such an accessible (sic profitable) field that many practitioners have the luxury of learning the subject without having to learn the origins of the formalisms. Ultimately this allows them to utilize or "reinvent" theories and techniques, often without knowing they have been around in other fields for much longer.
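
A minimal sketch of that point: a two-layer network where the backward pass is just the chain rule written out by hand (shapes and learning rate below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 3))             # a small batch of inputs
    y = rng.normal(size=(8, 1))             # regression targets
    W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))

    # Forward pass: y_hat = relu(x W1) W2, loss = mean squared error.
    h_pre = x @ W1
    h = np.maximum(h_pre, 0.0)
    y_hat = h @ W2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: the chain rule, applied link by link.
    d_y_hat = 2.0 * (y_hat - y) / y.size        # dL/dy_hat
    d_W2 = h.T @ d_y_hat                        # dL/dW2
    d_h = d_y_hat @ W2.T                        # push the gradient through W2
    d_h_pre = d_h * (h_pre > 0)                 # through the ReLU
    d_W1 = x.T @ d_h_pre                        # and finally onto W1

    # One step of gradient descent on the weights.
    lr = 0.01
    W1 -= lr * d_W1
    W2 -= lr * d_W2
    print(loss)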

saberience

None of the major aspects of deep learning came from manifolds though.

It is primarily linear algebra, calculus, probability theory, and statistics; secondarily you could add something like information theory for ideas like entropy, loss functions, etc.

But really, if "manifolds" had never been invented/conceptualized, we would still have deep learning now, it really made zero impact on the actual practical technology we are all using every day now.

qbit42

Loss landscapes can be viewed as manifolds. Adagrad/ADAM adjust SGD to better fit the local geometry and are widely used in practice.
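
For reference, a sketch of the standard Adam update, which rescales each coordinate of the SGD step by running estimates of the gradient's first and second moments; the toy quadratic loss and hyperparameters below are just for demonstration:

    import numpy as np

    def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        """One Adam update: SGD rescaled per-coordinate by running gradient moments."""
        m = b1 * m + (1 - b1) * grad                 # first moment (mean of gradients)
        v = b2 * v + (1 - b2) * grad ** 2            # second moment (uncentered variance)
        m_hat = m / (1 - b1 ** t)                    # bias correction
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-coordinate rescaling of the step
        return w, m, v

    target = np.array([1.0, -2.0, 0.5])
    w = np.zeros(3)
    m, v = np.zeros_like(w), np.zeros_like(w)
    for t in range(1, 501):
        grad = 2 * (w - target)          # gradient of a toy quadratic loss
        w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
    print(w)                             # ends up near [1., -2., 0.5]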

kwertzzz

Can you give an example where theories and techniques from other fields are reinvented? I would be genuinely interested for concrete examples. Such "reinventions" happen quite often in science, so to some degree this would be expected.

srean

Bethe ansatz is one. It took a tour de force by Yedidia to recognize that loopy belief propagation is computing the stationary point of Bethe's approximation to Free Energy.

Many statistical thermodynamics ideas were reinvented in ML.

Same is true for mirror descent. It was independently discovered by Warmuth and his students as Bregman divergence proximal minimization, or as a special case would have it, exponential gradient algorithms.

One can keep going.

nickpsecurity

One might add 8-16-bit training and quantization. Also, computing semi-unreliable values with error correction. Such tricks have been used in embedded software development on MCUs for some time.

whatever1

I mean the entire domain of systems control is being reinvented by deep RL. System identification, stability, robustness etc

behnamoh

> a few intuitions coming from theory (that was not topology).

I think these 'intuitions' are an after-the-fact thing, meaning AFTER deep learning comes up with a method, researchers in other fields of science notice the similarities between the deep learning approach and their (possibly decades old) methods. Here's an example where the author discovers that GPT is really the same kind of computational problem he has solved in physics before:

https://ondrejcertik.com/blog/2023/03/fastgpt-faster-than-py...

ogogmad

I beg to differ. It's complete hyperbole to suggest that the article said "it's the same problem as something in physics", given this statement:

     It seems that the bottleneck algorithm in GPT-2 inference is matrix-matrix multiplication. For physicists like us, matrix-matrix multiplication is very familiar, *unlike other aspects of AI and ML* [emphasis mine]. Finding this familiar ground inspired us to approach GPT-2 like any other numerical computing problem.
Note: Matrix-matrix multiplication is basic mathematics, and not remotely interesting as physics.

bee_rider

Agreed.

Although, to try to see it from the author’s perspective, it is pulling tools out of the same (extremely well developed and studied in its own right) toolbox as computational physics does. It is a little funny although not too surprising that a computational physics guy would look at some linear algebra code and immediately see the similarity.

Edit: actually, thinking a little more, it is basically absurd to believe that somebody has had a career in computational physics without knowing they are relying heavily on the HPC/scientific computing/numerical linear algebra toolbox. So, I think they are just using that to help with the narrative for the blog post.

constantcrying

You are exactly right, after deep learning researchers had invented Adam for SGD, numerical analysts finally discovered Gradient descent. And after the first neural net was discovered, finally the matrix was invented in the novel field of linear algebra.


theahura

I say this as someone who has been in deep learning for over a decade now: this is pretty wrong, both on the merits (data obviously lives on a manifold) and on its applications to deep learning (cf chris olah's blog as an example from 2014, which is linked in my post -- https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/). Embedding spaces are called 'spaces' for a reason. GANs, VAEs, contrastive losses -- all of these are about constructing vector manifolds that you can 'walk' to produce different kinds of data.

umutisik

If data did live on a manifold (e.g. images contained in R^{n^2}), then it wouldn't have thickness or branching, which it does. It's an imperfect approximation to help think about it. The use of mathematical language is not the same as an application of mathematics (and the use of the word 'space' there is not about topology).

almostgotcaught

You're citing a guy that never went to college (has no math or physics degree), has never published a paper, etc. I guess that actually tracks pretty well with how strong the whole "it's deep theory" claim is.

theahura

Chris Olah? One of the founders of Anthropic and the head of their interpretability team?

niemandhier

It’s alchemy.

Deep learning in its current form relates to a hypothetical underlying theory as alchemy does to chemistry.

In a few hundred years the Inuktitut speaking high schoolers of the civilisation that comes after us will learn that this strange word “deep learning” is a left over from the lingua franca of yore.

adamnemecek

Not really, most of the current approaches are some approximations of the partition function.

fmap

The reason deep learning is alchemy is that none of these deep theories have predictive ability.

Essentially all practical models are discovered by trial and error and then "explained" after the fact. In many papers you read a few paragraphs of derivation followed by a simpler formulation that "works better in practice". E.g., diffusion models: here's how to invert the forward diffusion process, but actually we don't use this, because gradient descent on the inverse log likelihood works better. For bonus points the paper might come up with an impressive name for the simple thing.

In most other fields you would not get away with this. Your reviewers would point this out and you'd have to reformulate the paper as an experience report, perhaps with a section about "preliminary progress towards theoretical understanding". If your theory doesn't match what you do in practice - and indeed many random approaches will kind of work (!) - then it's not a good theory.

esafak

It does if you relax your definition to accommodate approximation error, cf. e.g., Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning (https://aclanthology.org/2021.acl-long.568.pdf)

Koshkin

> Data doesn't actually live on a manifold.

Often, they do (and then they are called "sheaves").

wenc

Many types of data don’t. Disconnected spaces like integer spaces don’t sit on a manifold (they are lattices). Spiky noisy fragmented data don’t sit on a (smooth) manifold.

In fact not all ML models treat data as manifolds. Nearest neighbors, decision trees don’t require the manifold assumption and actually work better without it.

qbit42

Any reasonable statistical explanation of deep learning requires there to be some sort of low dimensional latent structure in the data. Otherwise, we would not have enough training data to learn good models, given how high the ambient dimensions are for most problems.

theahura

It turns out a lot of disconnected spaces can be approximated by smooth ones that have really sharp boundaries, which more or less seems to be how neural networks will approximate something like discrete tokens
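
A small illustration of that: a softmax over continuous logits is smooth everywhere, but at low temperature its output becomes effectively discrete (the logits below are made up):

    import numpy as np

    def softmax(z, temperature=1.0):
        z = z / temperature
        z = z - z.max()          # numerical stability
        e = np.exp(z)
        return e / e.sum()

    logits = np.array([2.0, 1.0, -1.0])

    # A smooth map over continuous logits...
    print(softmax(logits, temperature=1.0))    # spread-out probabilities
    # ...with a sharp, nearly discrete boundary as the temperature drops.
    print(softmax(logits, temperature=0.05))   # ~one-hot on the argmax token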

motoboi

Your comment sits in the nice gradient between not seeing at all the obvious relationships between deep learning and topology and thinking that deep learning is applied topology.

See? Everything lives in the manifold.

Now, for a great visualization of the Manifold Hypothesis, I cannot recommend this video enough: https://www.youtube.com/watch?v=pdNYw6qwuNc

That helps to visualize how the activation functions, biases, and weights (linear transformations) serve to stretch the high-dimensional space so that the data gets pushed to extremes and becomes easy to place on a low-dimensional object (the manifold) embedded in that high-dimensional space, where it is trivial to classify or separate.

Gaining an intuition about this process will make some deep learning practices so much easier to understand.
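
A minimal sketch of that stretching in action, assuming a synthetic two-circles dataset and an arbitrary 8-unit hidden layer:

    import torch
    from sklearn.datasets import make_circles

    # Two concentric circles: not linearly separable in the raw 2-D input space.
    X, y = make_circles(n_samples=512, factor=0.4, noise=0.05, random_state=0)
    X = torch.tensor(X, dtype=torch.float32)
    y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

    # A tiny MLP: the hidden layer stretches/folds the plane into 8 dimensions
    # until the two rings end up on opposite sides of a single hyperplane.
    model = torch.nn.Sequential(
        torch.nn.Linear(2, 8), torch.nn.ReLU(),
        torch.nn.Linear(8, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = torch.nn.BCEWithLogitsLoss()

    for _ in range(2000):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    acc = ((model(X) > 0).float() == y).float().mean()
    print(acc)   # typically ~1.0: the stretched representation is linearly separable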

thuuuomas

I cannot understand this prideful resentment of theory common among self-described practitioners.

Even if existing theory is inadequate, would an operating theory not be beneficial?

Or is the mystique combined with guess&check drudgery job security?

canjobear

If there were theory that led to directly useful results (like, telling you the right hyperparameters to use for your data in a simple way, or giving you a new kind of regularization that you can drop in to dramatically improve learning) then deep learning practitioners would love it. As it currently stands, such theories don't really exist.

theahura

This standard is way too rigorous. You can absolutely have theories that lead to useful results even if they aren't as predictive as you describe. The theory of evolution is an obvious counterpoint.

fiddlerwoaroof

Useful theories only come to exist because someone started by saying they must exist and then spent years or lifetimes discovering them.

jebarker

There are strong incentives to leave theory as technical debt and keep charging forward. I don't think it's resentment of theory; everyone would love a theory if one were available, but very few are willing to forgo the near-term rewards to pursue theory. Also, it's really hard.

lumost

There are many reasons to believe a theory may not be forthcoming, or that if it is available may not be useful.

For instance, we do not have consensus on what a theory should accomplish - should it provide convergence bounds/capability bounds? Should it predict optimal parameter counts/shapes? Should it allow more efficient calculation of optimal weights? Does it need to do these tasks in linear time?

Even materials science in metals is still cycling through theoretical models after thousands of years of making steel and other alloys.

hiddencost

Maybe a little less with the ad hominems? The OP is providing an accurate description of an extremely immature field.

cnity

Many mathematicians are (rightly, IMO) allergic to assertions that certain branches are not useful (explicit in OP), and especially so if they are dismissive of attempts to understand complicated real world phenomena (implicit in OP, if you ask me).

danielmarkbruce

Who is proud? What you are seeing in some cases is eye rolling. And it's fair eye rolling.

There is an enormous amount of theory used in the various parts of building models, there just isn't an overarching theory at the very most convenient level of abstraction.

It almost has to be this way. If there was some neat theory, people would use it and build even more complex things on top of it in an experimental way and then so on.

profchemai

Once I read "This has been enough to get us to AGI.", credibility took a nose dive.

In general it's a nice idea, but the blog post is very fluffy, especially once it connects it to reasoning; there is serious technical work in this area (e.g. https://arxiv.org/abs/1402.1869) that has expanded this idea and made it more concrete.

vayllon

Another type of topology you’ll encounter in deep neural networks (DNNs) is network topology. This refers to the structure of the network — how the nodes are connected and how data flows between them. We already have several well-known examples, such as auto-encoders, convolutional neural networks (CNNs), and generative adversarial networks (GANs), all of which are bio-inspired.

However, we still have much to learn about the topology of the brain and its functional connectivity. In the coming years, we are likely to discover new architectures — both internal within individual layers/nodes and in the ways specialized networks connect and interact with each other.

Additionally, the brain doesn’t rely on a single network, but rather on several ones — often referred to as the "Big 7" — that operate in parallel and are deeply interconnected. Some of these include the Default Mode Network (DMN), the Central Executive Network (CEN) or the Limbic Network, among others. In fact, a single neuron can be part of multiple networks, each serving different functions.

We have not yet been able to fully replicate this complexity in artificial systems, and there is still much to be learned from, and inspired by, in these "network topologies".

So, "Topology is all you need" :-)

lesostep

>as long as we can separate good from bad we can train a neural network to sort out the topology for us.

10-ish years ago, I saw a project training networks to guess biological sex from face photos. They carefully removed makeup, moustaches, hair, etc., so the model would be unbiased, yet they only reached 70 to 80% correct guesses. Still, it seemed like a great result, and they were trying to reach 99%.

The first thing I did after reading their paper was to look for a paper where people tried to guess biological sex from similar photos. And people weren't that much better at it. The difference between human and machine guessing was 1 or 2 percent.

I asked the guys who ran the project how they proved that such a division, based only on a photo, was even possible. They didn't understand the question; they just assumed that you can do it.

They couldn't improve their results in the end. Maybe they sucked at teaching neural networks, or maybe a lot of faces just are unisex if you remove gender markers.

I bring up this anecdote because these guys, in my eyes, made a reasonable assumption: that since, in most situations, they can guess what's in someone's pants by seeing someone's face, the face must carry this information.

The assumption that we could somehow separate good from bad, when we rewrite school books every year, when we try to calculate the "half-life of knowledge", when philosophy as a science isn't over, and when every day there are political and ideological debates about what's best, is a very, very unreasonable assumption.

lesostep

I forgot to conclude:

In the end, it's not even reasonable to assume that such a divide between "good" and "bad" exists at all.

terabytest

I'm confused by the author's diagram claiming that AGI/ASI are points on the same manifold as next token prediction, chat models, and CoT models. While the latter three are provably part of the same manifold, what justifies placing AGI/ASI there too?

What if the models capable of CoT aren't and will never be, regardless of topological manipulation, capable of processes that could be considered AGI? For example, human intelligence (the closest thing we know to AGI) requires extremely complex sensory and internal feedback loops and continuous processing unlike autoregressive models' discrete processing.

As a layman, this matches my intuition that LLMs are not at all in the same family of systems as the ones capable of generating intelligence or consciousness.

theahura

Possible. AGI/ASI are poorly defined. I tend to think we're already at AGI, obviously many disagree.

> For example, human intelligence (the closest thing we know to AGI) requires extremely complex sensory and internal feedback loops and continuous processing unlike autoregressive models' discrete processing.

I've done a fair bit of connectomics research and I think that this framing elides the ways in which neural networks and biological networks are actually quite similar. For example, in the mouse olfactory system there is something akin to a 'feature vector' that appears based on which neurons light up. Specific sets of neurons lighting up mean 'chocolate' or 'lemon' or whatever. More generally, it seems like neuronal representations are somewhat similar to embedding representations, and you could imagine constructing an embedding space based on what neurons light up where. Everything on top of the embeddings is 'just' processing.

fusionadvocate

I believe we already have the technology required for AGI. It is perhaps analogous to a manned lunar station or a 2-mile-tall skyscraper. We have the technology required to build it, but we don't for various reasons.

ada1981

For the last few years, I've been "seeing maps" whenever I think about LLMs. It's always felt like the most natural way to understand what is going on.

It's also for this reason that I think new knowledge is discoverable from within LLMs.

I imagine having a topographic map of some island that has only been explored partially by humans. But if I know the surrounding topography, I can make pretty accurate guesses about the areas I haven't been. And I think the same thing can be applied to certain areas of human knowledge, especially when represented as text or symbolically.

_alternator_

The question is not so much whether this is true—we can certainly represent any data as points on a manifold. Rather, it’s the extent to which this point of view is useful. In my experience, it’s not the most powerful perspective.

In short, direct manifold learning is not really tractable as an algorithmic approach. The most powerful set of tools and theoretical basis for AI has sprung from statistical optimization theory (SGD, information-theoretical loss minimization, etc.). The fact that data is on a manifold is a tautological footnote to this approach.