
Neuromorphic computing

43 comments

June 5, 2025

datameta

I could be mistaken with this nitpick but isn't there a unit mismatch in "...just 20 watts—the same amount of electricity that powers two LED lightbulbs for 24 hours..."?

rcoveson

Just 20 watts, the same amount of electricity that powers 2 LED lightbulbs for 24 hours, one nanosecond, or twelve-thousand years.

DavidVoid

There is indeed; watts aren't energy, and it's a common enough mistake that Technology Connections made a pretty good 52-minute video about it the other month [1].

[1]: https://www.youtube.com/watch?v=OOK5xkFijPc
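
To make the mismatch concrete, here is a minimal sketch (assuming a typical ~10 W LED bulb, a figure not stated in the article): 20 W is a rate, and it only becomes an amount of energy once multiplied by a duration.

```python
# Rough sketch of the power-vs-energy distinction (LED bulb wattage is an assumed ~10 W).
brain_power_w = 20        # watts: a *rate* of energy use
led_bulb_w = 10           # watts per bulb (assumption for illustration)
hours = 24

bulbs_equivalent = brain_power_w / led_bulb_w    # 2.0 bulbs, true for any duration
energy_wh = brain_power_w * hours                # 480 Wh = 0.48 kWh over one day

print(f"{brain_power_w} W matches {bulbs_equivalent:.0f} bulbs at any instant;")
print(f"sustained for {hours} h it amounts to {energy_wh} Wh ({energy_wh/1000:.2f} kWh).")
```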

quantum_state

Surprising that the article was not reviewed enough to ensure accurate use of basic physics concepts... from LANL!!!

kokanee

Philosophical thought: if the aim of this field is to create an artificial human brain, then it would be fair to say that the more advanced the field becomes, the less difference there is between the artificial brain and a real brain. This raises two questions:

1) Is the ultimate form of this technology ethically distinguishable from a slave?

2) Is there an ethical difference between bioengineering an actual human brain for computing purposes, versus constructing a digital version that is functionally identical?

layer8

For most applications, we don’t want “functionally identical”. We do not want it to have desires and a will of its own, biological(-analogous) needs, a circadian rhythm, fatigue and a need for sleep, mood changes and emotional swings, the capacity to feel pain, a sex drive, or a need for recognition and validation. So we don’t want to copy the neural and bodily correlates that give rise to those phenomena, which arguably are not essential to how the human brain manages to have the intelligence it has. That is likely to change the ethics of it drastically. We will have to learn more about how those things work in the brain in order to avoid the undesirable parts.

kokanee

If we back away from philosophy and think like engineers, I think you're entirely right and the question should be moot. I can't help but think, though, that in spite of it all, the Elon Musks and Sam Altmans of the future will not be stopped from attempting to create something indistinguishable from flesh and blood.

tough

I mean have you watched Westworld?

falcor84

In my opinion, one of the best works of fiction exploring this is qntm's "Lena" - https://qntm.org/mmacevedo

nis0s

Sorry, but no. I think it overemphasizes the parent over the resultant progeny for no reason, and as such I think the story is limited in its vision and treatment of the subject.

falcor84

Please say more

dlivingston

To 1) and 2), assuming a digital consciousness capable of self-awareness and introspection, I think the answer is clearly 'no'.

But:

> it would be fair to say that the more advanced the field becomes, the less difference there is between the artificial brain and a real brain.

I don't think it would be fair to say this. LLMs are certainly not worthy of ethical considerations. Consciousness needs to be demonstrable. Even if the synaptic structure of the digital vs. human brain approaches 1:1 similarity, the program running on it does not deserve ethical consideration unless and until consciousness can be demonstrated as an emergent property.

energy123

We should start by disambiguating intelligence and qualia. The field is trying to create intelligence, and kind of assuming that qualia won't be created alongside it.

falcor84

How would you go about disambiguating them? Isn't that literally the "hard problem of consciousness" [0]?

[0] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

feoren

"Qualia" is a meaningless term made up so that philosophers can keep publishing meaningless papers. It's completely unfalsifiable: there is no test you can even theoretically run to determine the existence or nonexistence of qualia. There's never a reason to concern yourself with it.

drdeca

The test I use to determine that there exist qualia is “looking”. Now, whether there is a test I can do to confirm that anything(/anyone) other than me experiences any qualia is another question. (I don’t see how there could be such a test, but perhaps I just don’t see it.)

So, probably not really falsifiable in the sense you are considering, yeah.

I don’t think that makes it meaningless, nor a worthless idea. It probably makes it not a scientific idea?

If you care about subjective experiences, it seems to make sense that you would then concern yourself with subjective experiences.

For the great lookup table Blockhead, whose memory banks take up a galaxy’s worth of space, storing a lookup table of responses for any possible partial conversation history with it, should we value not “hurting its feelings”? If not, why not? It responds just like how a person in an online one-on-one chat would.

Is “Is this [points at something] a moral patient?” a question amenable to scientific study? It doesn’t seem like it to me. How would you falsify answers of “yes” or “no”? But, I refuse to reject the question as “meaningless”.

layer8

The term has some validity as a word for what I take to be the inner perception of processes within the brain. The qualia of a scent, for example, can be taken to refer to the inner processing of scent perception giving rise to a secondary perception of that processing (or to other side effects of that processing, like evoking associated memories). I strongly suspect that that’s what’s actually going on when people talk about what it feels like to see red, and the like.

balamatom

Except that philosophers can keep publishing meaningless papers regardless.

lo_zamoyski

Drinking from the eliminativist hose, are we?

You can't be serious. Whatever one wishes to say about the framing, you cannot deny conscious experience. Materialism painted itself into this corner through its bad assumptions. Pretending it hasn't produced this problem for itself, that it doesn't exist, is just plain silly.

Time to show some intellectual integrity and revisit those assumptions.

thinkingtoilet

I am certain this answer will change as generations pass. The current generations, us, will say that there is a difference. Once a generation of kids grow up with AI assistants/friends/partners/etc... they will have a different view. They will demand rights and protections for their AI.

russdill

Disagree. It would be like saying that the more advanced transportation becomes, the more like a horse it will be.

thechao

Shining-brass 25 ton, coal-powered, steam-driven autohorse! 8 legs! Tireless! Breathes fire!

antithesizer

*shower thought

ge96

3) can we use a dead person's brain, hook up wires to it and oxygen, why not

geeunits

I've been building a 'neuromorphic' kernel/bare-metal OS that operates on Mac hardware using APL primitives as its core layer. Time is treated as another 'position', and the kernel itself is vector-oriented, using 4D addressing with a 32x32x32 'neural substrate'.

I am so ready and eager for a paradigm shift of hardware & software. I think in the future 'software' will disappear for most people, and they'll simply ask and receive.
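
For readers wondering what "4D addressing with a 32x32x32 substrate, time as another position" might mean in practice, here is a rough Python sketch of one possible scheme. It is only an interpretation of the comment above, not the actual kernel, and the size of the time window is an assumption.

```python
# Hypothetical sketch of "time as another position": a flat buffer addressed by (t, x, y, z).
import numpy as np

X = Y = Z = 32    # spatial extent of the 'neural substrate' (from the comment)
T = 64            # number of retained time positions (assumed)

substrate = np.zeros(T * X * Y * Z, dtype=np.float32)

def addr(t: int, x: int, y: int, z: int) -> int:
    """Linearize a 4D (time, x, y, z) coordinate into the flat substrate, row-major."""
    return (((t % T) * X + x) * Y + y) * Z + z

# Write an activation at spatial cell (3, 5, 7) at time step 10, then read a later time slot.
substrate[addr(10, 3, 5, 7)] = 1.0
print(substrate[addr(10, 3, 5, 7)])   # 1.0
print(substrate[addr(11, 3, 5, 7)])   # 0.0 -- a different time position, same spatial cell
```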

JimmyBuckets

I'd love to read more about this. Do you have a blog?

stefanv

And still no mention of Numenta… I’ve always felt it’s an underrated company, built on an even more underrated theory of intelligence

esafak

I want them to succeed but it's been two decades already. Maybe they should have started with a less challenging problem to grow the company?

meindnoch

They will be right on time when the first Mill CPU arrives!

kadushka

They pivoted to regular deep learning when Jeff stepped away from the company several years ago. It does not appear they're doing much brain modeling these days. Their last publication was 3 years ago.

Footpost

Neuromorphic computation has been hyped up for ~20 years by now. So far it has dramatically underperformed, at least vis-a-vis the hype.

The article does not distinguish between training and inference. Google Edge TPUs (https://coral.ai/products/) are each capable of performing 4 trillion operations per second (4 TOPS) using 2 watts of power, i.e. 2 TOPS per watt. So inference can already be done within the 20 watts the paper attributes to the brain. To be sure, LLM training is expensive, but so is raising a child for 20 years. Unlike the child, LLMs can share weights and amortise the energy cost of training.
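
As a back-of-the-envelope check on those numbers (using only the figures cited above, so this is illustrative rather than a benchmark):

```python
# Edge TPU figures from the comment: 4 TOPS at 2 W.
tpu_ops_per_s = 4e12
tpu_power_w = 2.0

ops_per_joule = tpu_ops_per_s / tpu_power_w      # 2e12 ops/J, i.e. 2 TOPS per watt
picojoules_per_op = 1e12 / ops_per_joule         # 0.5 pJ per operation

# At that efficiency, a brain-sized 20 W power budget would buy:
ops_at_20_w = ops_per_joule * 20                 # 4e13 ops/s = 40 TOPS

print(f"{picojoules_per_op:.2f} pJ/op, {ops_at_20_w/1e12:.0f} TOPS within a 20 W budget")
```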

Another core problem with neuromorphic computation is that we currently have no meaningful idea how the brain produces intelligence, so it seems a bit premature to claim we can copy this mechanism. Here is what Nvidia Chief Scientist B. Dally (one of the main developers of modern GPU architectures) says about the subject: "I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!" From the "Hardware for Deep Learning" HotChips 2023 keynote, https://www.youtube.com/watch?v=rsxCZAE8QNA at 21:28. The whole talk is brilliant and worth watching.

ge96

Just searched against HN, seems this term is at least 8 years old

lukeinator42

The term neuromorphic? It was coined in 1990: https://ieeexplore.ieee.org/abstract/document/58356

newfocogi

Once again, I am quite surprised by the sudden uptick of AI content on HN coming out of LANL. Does anyone know if it's just getting posted to HN and staying on the first page suddenly, or is this a change in strategy for the lab? Even so, I don't see the other NatLabs showing up like this.

fintler

Probably because they're hosting an exascale-class cluster with a bazillion GH200s. Also, they launched a new "National Security AI Office".

ivattano

The primary pool of money for DOE labs is through a program called "Frontiers in Artificial Intelligence for Science, Security and Technology (FASST)," replacing the Exascale Computing Project. Compared to other labs, LANL historically has not had many dedicated ML/AI groups, but it has recently spun up an entire branch to help secure as much of that FASST money as possible.

gyrovagueGeist

I am not sure why HN has mostly LANL posts. Otherwise, though, it is a combination of things: machine learning applications for NatSec and fundamental research have become more important (see FASST, proposed last year); the current political environment makes AI funding and applications more secure and easier to chase; and some of this is work that has already been going on but is getting greater publicity for both of those reasons.

CamperBob2

I imagine the mood at the national labs right now is pretty panicky. They will be looking to get involved with more real-world applications than they traditionally have been, and will also want to appear more engaged with trendy technologies.

random3

memristors are back