TopoNets: High performing vision and language models with brain-like topography
35 comments
· January 31, 2025
energy123
The main reason topography emerges in physical brains is that spatially distant connections are physically difficult and expensive in biological systems. Artificial neural nets have no such trade-off. So what's the motivation here? I can understand that this might be a very good regularizer, so it could help with generalization error on small-data tasks. But it's hard to see why this should be on the critical path to AGI. As compute and data grow, you want less inductive bias. For example, a CNN will beat a ViT on small-data tasks, but that flips with enough scale because the ViT imposes less inductive bias. Or at least any inductive bias should be chosen because it models the structure of the data well, as with causal transformers and language.
AYBABTME
Locality of data and computation is very important in neural nets. It's the number one reason why training and inference are as slow as they are. It's why GPUs need super expensive HBM memory, why NVLink is a thing, and why InfiniBand is a thing.
If the problem of training and inference on neural networks can be optimized so that a topology can be used to keep closely related data together, we will see huge advancements in training and inference speed, and probably in model size as a result.
And speed isn't just speed. Speed makes impossible (not enough time in our lifetime) things possible.
A huge factor in DeepSeek being able to train on the H800 (half the HBM bandwidth of the H100) is that they used GPU cores to compress/decompress the data moved between GPU memory and the compute units. This reduces the latency of accessing data and made up for the slower memory bandwidth (which translates into higher latency when fetching data). Anything that reduces the latency of memory accesses is a huge accelerator for neural nets. The number one way to achieve this is to keep related data next to each other, so that it fits in the closest caches possible.
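(To make the bandwidth-vs-compute trade concrete, here is a minimal PyTorch sketch of the general pattern: quantize a tensor before it crosses a slow link, dequantize it next to the compute units. This is only an illustration of the idea, not DeepSeek's actual pipeline; the int8 scheme is an assumption.)

    # Hedged illustration: shrink a tensor to int8 before it crosses a slow
    # memory link, then dequantize it next to the compute units, trading a
    # little compute for 4x less traffic. Not DeepSeek's actual scheme.
    import torch

    def compress(x: torch.Tensor):
        # Per-tensor symmetric int8 quantization.
        scale = x.abs().max() / 127.0
        q = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
        return q, scale

    def decompress(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        # Reconstruct an approximation on the receiving side.
        return q.to(torch.float32) * scale

    x = torch.randn(4096, 4096)
    q, scale = compress(x)            # this is what moves across the slow link
    x_hat = decompress(q, scale)      # reconstructed near the compute units
    print((x - x_hat).abs().max())    # small quantization error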
mirekrusin
It's true, but isn't OP also correct? I.e., it's about speed, which implies locality, which implies approaches like MoE, which does exactly that and is unlike physical brain topology?
Having said that, it would be fun to see things like rearranging data based on the temperature of silicon parts after a training cycle.
vlovich123
Unless GPUs work markedly differently somehow or there’s been some fundamental shift in computer architecture I’m not aware of, spatial locality is still a factor in computers.
Aside from HW acceleration today, designs like Cerebras would benefit heavily from reducing the amount of random access involved in reading the weights (and thus freeing up cross-chip memory bandwidth for other things).
whynotminot
This reminds me of game developers back when games could still be played directly from the physical disc. They would often duplicate data to different parts of the disc, knowing that certain data would often be streamed from disc together, so that seek times were minimized.
But those game devs knew where everything was spatially on the disc, and how the data would generally be used during gameplay. It was consistent.
Do engineers have a lot of insight into how models get loaded spatially onto a given GPU at run time? Is this constant? Is it variable on a per GPU basis? I would think it would have to be.
Hard to optimize for this.
jaek
This brings to mind The Story of Mel from programming folklore.
harles
That could explain compute efficiency, but has nothing to do with the parameter efficiency pointed at in the paper.
jv22222
I had this idea the other day. Not sure if it relates but maybe?
https://twitter.com/justinvincent/status/1884357300703400274
TZubiri
Maybe this would be relevant for datacenters with significant distance between machines, or multidatacenter systems.
xpl
> So what's the motivation here?
Better interpretability, I suppose. Could give insights into how cognition works.
mayukhdeb
The motivation was to induce structure in the weights of neural nets and see if the functional organization that emerges aligns with that of the brain or not. Turns out, it does -- both for vision and language.
The gains in parameter efficiency were a surprise even to us when we first tried it out.
energy123
That's true, and interpretability is helpful for AI safety.
mayukhdeb
Indeed. What's cool is that we were able to localize literal "regions" in the GPTs which encoded toxic concepts related to racism, politics, etc. A similar video can be found here: https://toponets.github.io
More work is being done on this as we speak.
mercer
I imagine it could be easier to make sense of the 'biological' patterns that way? Like, having bottlenecks or spatially-related challenges might have to be simulated too, to make sense of the ingested 'biological' information.
ziofill
Perhaps they are more easily compressible? Once a bunch of nearby weights have similar roles one may not need all of them.
mayukhdeb
Yep. That is exactly the idea here. Our compression method is super duper naive. We literally keep every n-th weight column and discard the rest. Turns out that even after getting rid of 80% of the weight columns in this way, we were able to retain the same performance in a 125M GPT.
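(For concreteness, here is a minimal PyTorch sketch of that strided pruning applied to a single linear layer: keep every n-th weight column and rebuild a smaller layer. The function and variable names are illustrative, not the paper's code, and in a full model the matching outputs of the preceding layer would have to be dropped as well.)

    # Minimal sketch of the strided pruning described above: keep every
    # n-th column of a linear layer's weight matrix and discard the rest.
    # Illustrative only; not the paper's actual code.
    import torch
    import torch.nn as nn

    def prune_columns(layer: nn.Linear, keep_every: int = 5) -> nn.Linear:
        W = layer.weight.data                       # (out_features, in_features)
        kept = torch.arange(0, W.shape[1], keep_every)
        pruned = nn.Linear(len(kept), W.shape[0], bias=layer.bias is not None)
        pruned.weight.data = W[:, kept].clone()
        if layer.bias is not None:
            pruned.bias.data = layer.bias.data.clone()
        # keep_every=5 discards ~80% of the columns; in a real network the
        # corresponding outputs of the previous layer must be removed too.
        return pruned

    layer = nn.Linear(1024, 1024)
    small = prune_columns(layer, keep_every=5)
    print(small.weight.shape)                       # torch.Size([1024, 205])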
FrereKhan
This paper imports an arbitrarily-chosen aspect of cortical architecture — topological maps of function — and ignores every other aspect of biological neural tissue. The resulting models show lower performance for the same number of parameters — not surprising, since they are more constrained compared with baseline. They may be slightly more robust against pruning — not surprising, since they are more regularised.
The figures show individual seeds, presumably, with no statistical analysis in the performance or pruning comparisons, so we cannot reject the null hypothesis that there is no difference between TopoNets and the baseline. I would never let this paper be submitted by my team.
We haven't learned anything about the brain, or about ANNs.
slama
The title here doesn't seem to match. The paper is called "TopoNets: High Performing Vision and Language Models with Brain-Like Topography"
Even with their new method, models with topography seem to perform worse than models without.
dang
Submitted title was "Inducing brain-like structure in GPT's weights makes them parameter efficient". We've reverted it now in keeping with the site guidelines (https://news.ycombinator.com/newsguidelines.html).
Since the submitter appears to be one of the authors, maybe they can explain the connection between the two titles? (Or maybe they already have! I haven't read the entire thread)
LZ_Khan
Shouldn't there be a comparison in performance on common benchmarks to other models?
Like a 7B toponet model vs a 7B Llama model?
As a layperson I don't understand why topology is a thing to optimize for.
TOMDM
The only potential benefit shown in the paper is that the topographically organized models seem to be more resilient to pruning.
So you may be able to prune a 7B model down to 6B while maintaining most of the capability.
mayukhdeb
> The only potential benefit
Other benefits:
1. Significantly lower dimensionality of internal representations
2. More interpretable (see: https://toponets.github.io)
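(As an aside on point 1: one common way to quantify the dimensionality of internal representations is the participation ratio of the activation covariance eigenvalues. The sketch below illustrates that general measure; it is not necessarily the metric used in the paper.)

    # Illustration: "effective dimensionality" of activations measured as
    # the participation ratio of the covariance eigenvalues. A common
    # metric, not necessarily the one used in the paper.
    import torch

    def effective_dim(acts: torch.Tensor) -> float:
        # acts: (num_samples, num_features)
        acts = acts - acts.mean(dim=0, keepdim=True)
        cov = acts.T @ acts / (acts.shape[0] - 1)
        eig = torch.linalg.eigvalsh(cov).clamp(min=0)
        return (eig.sum() ** 2 / (eig ** 2).sum()).item()

    low_rank = torch.randn(5000, 8) @ torch.randn(8, 256)   # 256 features, rank 8
    full_rank = torch.randn(5000, 256)
    print(effective_dim(low_rank))    # small: close to the latent rank of 8
    print(effective_dim(full_rank))   # large: close to the feature count of 256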
> 7B model down to 6B
We remove ~80% of the parameters in topographic layers and retain the same performance in the model. The drop in the model's total parameter count is modest because we did not experiment with applying TopoLoss in all of the model's layers (that did not align with the goal of the paper).
We are currently performing those strong sparsity experiments internally, and the results look very promising!
michalsustr
The blurring in the sheets and the topo loss reminded me of https://arxiv.org/abs/2408.05446
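(For readers who have not opened the paper: the sketch below shows what a blur-based topographic loss of the kind referenced here could look like, i.e. reshape a layer's weights onto a 2D sheet and pull the sheet toward a blurred copy of itself, so that nearby units end up with similar weights. This is a reconstruction for illustration, with made-up sheet sizes, not the paper's exact TopoLoss.)

    # Hedged reconstruction of a blur-based topographic loss: arrange each
    # layer's units on a 2D sheet and penalize the difference between the
    # sheet and a blurred copy of itself. Not the paper's exact TopoLoss.
    import torch
    import torch.nn.functional as F

    def topo_loss(weight: torch.Tensor, sheet_hw, blur: int = 3) -> torch.Tensor:
        out_features, in_features = weight.shape
        h, w = sheet_hw
        assert h * w == out_features
        # One h x w "sheet" per input dimension: how that input projects
        # onto the output units laid out on the grid.
        sheet = weight.T.reshape(in_features, 1, h, w)
        blurred = F.interpolate(F.avg_pool2d(sheet, blur), size=(h, w),
                                mode="bilinear", align_corners=False)
        # Minimizing this makes nearby units on the sheet more similar.
        return F.mse_loss(sheet, blurred)

    W = torch.randn(64, 128, requires_grad=True)   # toy 128 -> 64 linear layer
    loss = topo_loss(W, sheet_hw=(8, 8))
    loss.backward()       # would be added to the task loss with a small weight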
vessenes
I hate to dog on research papers. They’re work to write. That said, I think this paper is not likely to be of interest to AI researchers — instead it may be of interest to Neuroscience folks or other brain research types.
The lede — adding topography worsens networks at similar weights — is not only buried, it’s obscured with statements claiming that topo networks show less upheaval when scaled down, e.g. they are more efficient than similar weight networks.
It’s hard for me to see how both these things can be true — the graphs show the more topography is added, the worse the networks perform at the trained model sizes.
To have the second statement, "They compress better and are therefore more efficient", also be true, I think you'd need to show a pretty remarkable claim: while a model trained at the same scale as a Llama architecture is worse, when you scale them both down, this model becomes not only better than the scaled-down Llama, but also better than a natively trained model at the new, smaller scale.
There is no proof of this in the paper, and good reason to be skeptical of this idea based on the data presented.
That said, like a lot of ideas in AI, this .. works! You can train a model successfully imposing these outside structures on it, and that model doesn’t even suck very much. Which is a cool statement about complexity theory and the resilience of these architectures, in my opinion. But I don’t think it says much else about either the brain or underlying AI ‘truths’.
devmor
Is this "brain-like" in any functional way, or "brain-like" in the same way that a tall rectangle is "door-like" even if it doesn't share any functions with a door?
I know quite a bit about machine learning, but very little to nothing about neuroscience and human cognition, so I am curious how an expert (that didn't work on the paper) would describe it.
(Forgive me for the pre-emptive negativity but I am so utterly exhausted by dishonest comparisons to sapient thought in the field of artificial intelligence that it has nearly drained me of the incredible amount of enthusiasm I used to carry for it.)
light_hue_1
They bury the part where inducing brain-like structure hurts performance!
This is a method to just hurt your network in exchange for nothing useful at all aside from some sketchy story that this is "brain like".
mayukhdeb
Our goal was never to optimize for performance. There's a long-standing hypothesis that topographic structure in the human brain leads to metabolic efficiency. Thanks to topography in ANNs, we were able to test this hypothesis in a computational setting.
> sketchy story this is "brain like".
We reproduce the hallmarks of functional organization seen in the visual and language cortices of the brain. I encourage you to read the paper before making such comments.