
SpikingBrain 7B – More efficient than classic LLMs

cpldcpu

>The current implementation adopts pseudo-spiking, where activations are approximated as spike-like signals at the tensor level, rather than true asynchronous event-driven spiking on neuromorphic hardware.

Isn't that in essence very similar to Quantization-Aware Training (QAT)?
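A rough sketch of the parallel being drawn here, assuming the pseudo-spiking works by rounding scaled activations to integer spike counts with a straight-through estimator. The function and parameter names are hypothetical, not from the paper:

    import torch

    def pseudo_spike(x, t_max=4):
        # Tensor-level "pseudo-spiking": squash activations into [0, 1] and
        # round to an integer spike count in {0, ..., t_max}, as if each
        # unit fired that many times in a fixed window.
        s = torch.clamp(x, 0.0, 1.0) * t_max
        counts = torch.round(s)
        # Straight-through estimator: the forward pass uses the rounded
        # counts, the backward pass treats rounding as identity -- the same
        # trick QAT uses for fake-quantized activations.
        counts = s + (counts - s).detach()
        return counts / t_max  # rescale to the original activation range

    y = pseudo_spike(torch.randn(2, 8, requires_grad=True))

Under these assumptions the forward/backward structure is the same as fake quantization; the difference is interpretation, reading the integers as spike counts rather than quantization levels.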

spwa4

Can you explain more? Why would that be the case? What is passed from one layer to the next is not a continuous value but the delay until the next spike, which is very different.
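A minimal sketch of the temporal coding this comment describes, assuming a simple time-to-first-spike code; the names and the 10 ms window are illustrative, not from the paper. A stronger input fires earlier, so what the next layer receives is a delay rather than a magnitude:

    import numpy as np

    def to_spike_time(x, t_window=10.0):
        # Latency (time-to-first-spike) coding: stronger input -> earlier
        # spike. The downstream "value" is a delay, not an activation.
        x = np.clip(x, 0.0, 1.0)
        return t_window * (1.0 - x)

    def from_spike_time(t, t_window=10.0):
        # Approximate inverse a downstream neuron could implement.
        return 1.0 - np.clip(t, 0.0, t_window) / t_window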

asdfasdf1

SpikingBrain Technical Report: Spiking Brain-inspired Large Models https://arxiv.org/abs/2509.05276

cpldcpu

Well, it would still allow deploying the trained model to SNN hardware, if such hardware existed.
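A minimal sketch of why that deployment path would stay open: tensor-level spike counts can be expanded back into binary spike trains, the event-driven form neuromorphic hardware consumes. The expansion scheme here is hypothetical; a real port would schedule spike timing far more carefully:

    import torch

    def counts_to_spike_train(counts, t_max=4):
        # Expand integer spike counts (shape [...]) into a binary spike
        # train of shape [t_max, ...]: a count of k fires on the first
        # k of the t_max timesteps.
        steps = torch.arange(t_max).view(-1, *([1] * counts.dim()))
        return (steps < counts.unsqueeze(0)).float()

    counts = torch.tensor([[0., 2., 4.]])  # e.g. pseudo-spike counts
    train = counts_to_spike_train(counts)  # shape [4, 1, 3]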