Efficient Computer's Electron E1 CPU – 100x more efficient than Arm?
70 comments
· July 25, 2025
pclmulqdq
This is a CGRA. It's like an FPGA but with bigger cells. It's not a VLIW core.
I assume that like all past attempts at this, it's about 20x more efficient when code fits in the one array (FPGAs get this ratio), but if your code size grows past something very trivial, the grid config needs to switch and that costs tons of time and power.
rf15
I agree this is very "FPGA-shaped" and I wonder if they have further switching optimisations on hand.
RossBencina
My understanding is that they have a grid configuration cache, and are certainly trying to reduce the time/power cost of changing the grid connectivity.
pclmulqdq
An FPGA startup called Tabula had the same thesis and it didn't work out well for them. Their configurable blocks had 16 configurations that they would let you cycle through. Reportedly, the chips were hell to program and the default tools were terrible.
reactordev
Is that a design flaw or a tooling flaw? The dev experience is usually left till the very end of a proof of concept like this.
gamache
Sounds a lot like GreenArray GA144 (https://www.greenarraychips.com/home/documents/greg/GA144.ht...)! Sadly, without a bizarre and proprietary FORTH dialect to call its own, I fear the E1 will not have the market traction of its predecessor.
jnpnj
That was my first thought too. I really like the idea of an interconnected node array. There's something biological about thinking in topology and neighbour diffusion that I find appealing.
londons_explore
One day someone will get it working...
Data transfer is slow and power hungry - it's obvious that putting a little bit of compute next to every bit of memory is the way to minimize data transfer distance.
The laws of physics can't be broken, yet people demand more and more performance, so eventually this problem will be worth the difficulty of solving it.
AnimalMuppet
That minimizes the data transfer distance from that bit of memory to that bit of compute. But it increases the distance between that bit of (memory and compute) and all the other bits of (memory and compute). If your problem is bigger than one bit of memory, such a configuration is probably a net loss, because of the increased data transfer distance between all the bits.
Your last paragraph... you're right that, sooner or later, something will have to give. There will be some scale such that, if you create clumps either larger or smaller than that scale, things will only get worse. (But that scale may be problem-dependent...) I agree that sooner or later we will have to do something about it.
Imustaskforhelp
Pardon me, but could somebody here explain this to me like I'm 15? It's late at night and I can't go down another rabbit hole, and I would appreciate it. Cheers and good night, fellow HN users.
elseless
Sure. You can think of a (simple) traditional CPU as executing instructions in time, one-at-a-time[1] — it fetches an instruction, decodes it, performs an arithmetic/logical operation, or maybe a memory operation, and then the instruction is considered to be complete.
The Efficient architecture is a CGRA (coarse-grained reconfigurable array), which means that it executes instructions in space instead of time. At compile time, the Efficient compiler looks at a graph made up of all the “unrolled” instructions (and data) in the program, and decides how to map it all spatially onto the hardware units. Of course, the graph may not all fit onto the hardware at once, in which case it must also be split up to run in batches over time. But the key difference is that there’s this sort of spatial unrolling that goes on.
This means that a lot of the work of fetching and decoding instructions and data can be eliminated, which is good. However, it also means that the program must be mostly, if not completely, static, meaning there’s a very limited ability for data-dependent branching, looping, etc. to occur compared to a CPU. So even if the compiler claims to support C++/Rust/etc., it probably does not support, e.g., pointers or dynamically-allocated objects as we usually think of them.
[1] Most modern CPUs don’t actually execute instructions one-at-a-time — that’s just an abstraction to make programming them easier. Under the hood, even in a single-core CPU, there is all sorts of reordering and concurrent execution going on, mostly to hide the fact that memory is much slower to access than on-chip registers and caches.
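To make the spatial mapping concrete, here's a toy C kernel (my own illustration, nothing to do with Efficient's actual toolchain): a fixed-size dot product has a dataflow graph that is fully known at compile time, so in principle every multiply and add could be pinned to its own tile, with results flowing between neighbours instead of instructions being fetched and decoded every cycle.

    #include <stdio.h>

    #define N 4

    /* Fixed-size dot product: the compiler can see every operation and every
       data dependency up front, so a spatial compiler could assign each
       multiply and each add to its own tile ("instructions in space").
       Tile numbers below are purely illustrative. */
    static int dot4(const int a[N], const int b[N]) {
        int p0 = a[0] * b[0];   /* tile 0 */
        int p1 = a[1] * b[1];   /* tile 1 */
        int p2 = a[2] * b[2];   /* tile 2 */
        int p3 = a[3] * b[3];   /* tile 3 */
        int s0 = p0 + p1;       /* tile 4: consumes the outputs of tiles 0 and 1 */
        int s1 = p2 + p3;       /* tile 5 */
        return s0 + s1;         /* tile 6 */
    }

    int main(void) {
        int a[N] = {1, 2, 3, 4};
        int b[N] = {5, 6, 7, 8};
        printf("%d\n", dot4(a, b));  /* prints 70 */
        return 0;
    }

A data-dependent while loop or a pointer chase has no such fixed graph, which is where the "mostly static" caveat bites.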
pclmulqdq
Pointers and dynamic objects are probably fine given the ability to do indirect loads, which I assume they have (Side note: I have built b-trees on FPGAs before, and these kinds of data structures are smaller than you think). It's actually pure code size that is the problem here rather than specific capabilities, as long as the hardware supports those instructions.
Instead of assembly instructions taking time in these architectures, they take space. You will have a capacity of 1000-100000 instructions (including all the branches you might take), and then the chip is full. To get past that limit, you have to store state to RAM and then reconfigure the array to continue computing.
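A toy way to see why branches eat space rather than time (my sketch, not their tooling): on a CPU an if/else costs cycles only on the path actually taken, while on a spatial array both arms typically get laid out and a select node picks the result, so the untaken side still occupies tiles.

    #include <stdio.h>

    /* On a conventional CPU this is a branch: only one side executes.
       On a spatial fabric, a compiler would typically lay out BOTH sides and
       add a select/mux node, so both arms consume array capacity. */
    static int clamp_scale(int x) {
        int scaled  = x * 3 + 1;   /* arm A: occupies tiles even when not "taken" */
        int clamped = 255;         /* arm B: likewise */
        return (scaled > 255) ? clamped : scaled;  /* select node */
    }

    int main(void) {
        printf("%d %d\n", clamp_scale(10), clamp_scale(100));  /* prints 31 255 */
        return 0;
    }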
elseless
Agree that code size is a significant potential issue, and that going out to memory to reprogram the fabric will be costly.
Re: pointers, I should clarify that it’s not the indirection per se that causes problems — it’s the fact that, with (traditional) dynamic memory allocation, the data’s physical location isn’t known ahead of time. It could be cached nearby, or way off in main memory. That makes dataflow operator latencies unpredictable, so you either have to 1. leave a lot more slack in your schedule to tolerate misses, or 2. build some more-complicated logic into each CGRA core to handle the asynchronicity. And with 2., you run the risk that the small, lightweight CGRA slices will effectively just turn into CPU cores.
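Rough illustration in plain C (an assumed example, not anything from their docs): the first loop walks a contiguous array with a fixed stride, so the access pattern is known when the fabric is scheduled; the second chases pointers through the heap, so every load's latency depends on wherever the allocator happened to put the node.

    #include <stdio.h>

    struct node { int val; struct node *next; };

    /* Predictable: contiguous array, fixed stride; latency can be budgeted
       for when the schedule is built. */
    static int sum_array(const int *a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Unpredictable: each load's address comes out of the previous load, and
       the node might be in a nearby cache or out in DRAM, so the operator
       latency isn't knowable when the fabric is scheduled. */
    static int sum_list(const struct node *p) {
        int s = 0;
        while (p) {
            s += p->val;
            p = p->next;
        }
        return s;
    }

    int main(void) {
        int a[3] = {1, 2, 3};
        struct node n2 = {3, NULL}, n1 = {2, &n2}, n0 = {1, &n1};
        printf("%d %d\n", sum_array(a, 3), sum_list(&n0));  /* prints 6 6 */
        return 0;
    }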
kannanvijayan
Hmm. You'd be able to trade off time for that space by using more general configurations that you can dynamically map instruction-sequences onto, no?
The mapping wouldn't be as efficient as a bespoke compilation, but it should be able to avoid the configuration swap-outs.
Basically a set of configurations that can be used as an interpreter.
markhahn
I think that footnote is close to the heart of it: on a modern OoO superscalar processor, there are hundreds of instructions in flight. That means a lot of work done to maintain their state and ensure that they "fire" when their operands are satisfied. I think that's what this new system is about: a distributed, scalable dataflow-orchestration engine.
I think this still depends very much on the compiler: whether it can assemble "patches" of direct dependencies to put into each of the little processing units. The edges between patches are either high-latency operations (memory) or inter-patch links resulting from partitioning the overall dataflow graph. I suspect it's the NoC addressing that will be most interesting.
majkinetor
> meaning there’s a very limited ability for data-dependent branching, looping, etc. to occur compared to a CPU
Not very useful then if I can't do this very basic thing?
esperent
> it executes instructions in space instead of time. At compile time, the Efficient compiler looks at a graph made up of all the “unrolled” instructions (and data) in the program, and decides how to map it all spatially onto the hardware units.
Naively that sounds similar to a GPU. Is it?
Nevermark
Instead of large cores operating mostly independently in parallel (with a few standardized, hardwired pipeline steps per core), …
You have many more very small ALU cores, configurable into longer custom pipelines with each step more or less as wide/parallel or narrow as it needs to be for each step.
Instead of streaming instructions over & over to large cores, you use them to set up those custom pipeline circuits, each running until it’s used up its data.
And you also have some opportunity for multiple such pipelines operating in parallel depending on how many operations (tiles) each pipeline needs.
hencoappel
Found this video a good explanation. https://youtu.be/xuUM84dvxcY?si=VPBEsu8wz70vWbX4
Tempest1981
Thanks. (Why does he keep pointing at me?)
wmf
Probably not. This is graduate-level computer architecture.
archipelago123
It's a dataflow architecture. I assume the hardware implementation is very similar to what is described here: https://csg.csail.mit.edu/pubs/memos/Memo-229/Memo-229.pdf. The problem is that it becomes difficult to exploit data locality, and there is only so much optimization you can perform at compile time. Also, the motivation for these types of architectures (e.g. the lack of ILP in von Neumann-style architectures) is non-existent in modern OoO cores.
timschmidt
Out-of-order cores spend an order of magnitude more logic and energy than in-order cores on handling invalidation, pipeline flushes, branch prediction, etc., all with the goal of increasing performance. This architecture is attempting to lower the joules per instruction at the cost of performance, not increase energy use in exchange for performance.
pedalpete
Though I'm sure this is valuable in certain instances, thinking about many embedded designs today, is the CPU/micro really the energy hog in these systems?
We're building an EEG headband with a bone-conduction speaker, so in order of power draw, our speaker and LEDs are orders of magnitude more expensive than our microcontroller.
In anything with a screen, that screen is going to suck all the juice, then your radios, etc. etc.
I'm sure there are very specific use-cases that a more energy efficient CPU will make a difference, but I struggle to think of anything that has a human interface where the CPU is the bottleneck, though I could be completely wrong.
schobi
I would not expect that this becomes competitive against a low power controller that is sleeping most of the time, like in a typical wristwatch wearable.
However, the examples indicate that if you have a loop that is executed over and over, the setup cost of configuring the fabric could be worth paying. Like a continuous audio stream in wake-word detection, a hearing aid, or continuous signals from an EEG.
Instead of running a general-purpose CPU at 1 MHz, the fabric would be used to unroll the loop: you would use (up to) 100 building blocks, one for each individual operation. Instead of one instruction after another, you have a pipeline that can execute one operation per building block per cycle. The compute thus only needs to run at 1/100 of the clock, e.g. the 10 kHz sampling rate of the incoming data. Each tick of the clock moves data through the pipeline, one step at a time.
I have no insights but can imagine how marketing thinks: "let's build a 10x10 grid of building blocks, if they are all used, the clock can be 1/100... Boom - claim up to 100x more efficient!" I hope their savings estimate is more elaborate though...
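Back-of-envelope version of the 100x, with made-up numbers just to show the shape of the argument:

    #include <stdio.h>

    /* Purely illustrative figures, not Efficient's. */
    int main(void) {
        double sample_rate_hz  = 10e3;   /* incoming samples per second */
        double ops_per_sample  = 100.0;  /* operations in the inner loop */
        double cpu_clock_hz    = sample_rate_hz * ops_per_sample;  /* sequential CPU */
        double fabric_clock_hz = sample_rate_hz;  /* 100 tiles, one op each, pipelined */

        printf("CPU clock needed:    %.0f Hz\n", cpu_clock_hz);     /* 1000000 */
        printf("Fabric clock needed: %.0f Hz\n", fabric_clock_hz);  /* 10000 */
        printf("Naive clock ratio:   %.0fx\n", cpu_clock_hz / fabric_clock_hz);  /* 100 */
        return 0;
    }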
montymintypie
Human interfaces, sure, but there's a good chunk of industrial sensing IoT that might do some non-trivial edge processing to decide if firing up the radio is even worth it. I can see this being useful there. Potentially also in smart watches with low power LCD/epaper displays, where the processor starts to become more visible in power charts.
Wonder if it could also be a coprocessor, if the fabric has a limited cell count? Do your DSP work on the optimised chip and hand off to the expensive radio softdevice when your code size is known to be large.
kendalf89
This grid-based architecture reminds me of a programming game from Zachtronics, TIS-100.
mcphage
I thought the same thing :-)
gchadwick
> The interconnect between tiles is also statically routed and bufferless, decided at compile time. As there's no flow control or retry logic, if two data paths would normally collide, the compiler has to resolve it at compile time.
This sounds like the most troublesome part of the design to me. It's very hard to do this static scheduling well. You can end up having to hold up everything, waiting for some tiny thing to complete so you can proceed forward in lock step. You'll also have situations where the static scheduling works 95% of the time, but something fiddly happens in the other 5% of cases. Without any ability for dynamic behaviour and data movement, small corner cases dominate how the rest of the system behaves.
Interestingly, you see this very problem in hardware design! All paths between logic gates need to stay under some maximum delay to reach a target clock frequency. Often you get long, fiddly paths relating to corner cases in behaviour that require significant manual effort to resolve before you achieve timing closure.
regularfry
Was I misreading, or is this thing not essentially unclocked? There have been asynchronous designs in the past (of ARM6 cores, no less) but they've not taken the world by storm.
ZiiS
Percentage chance this is 100x more efficient at the general-purpose computing ARM is optimized for: 1/100%
Grosvenor
Is this the return of Itanium? Static scheduling and pushing everything to the compiler: it sounds like it.
wood_spirit
The Mill videos are worth watching again - there are variations on NaT handling and looping and branching etc that make DSPs much more general-purpose.
I don’t know how similar this Electron is, but the Mill explained how it could be done.
Edit: aha, found them! https://m.youtube.com/playlist?list=PLFls3Q5bBInj_FfNLrV7gGd...
smlacy
I love these videos and his enthusiasm for the problem space. Unfortunately, it seems to me that the progress/ideas have floundered because of concerns around monetizing intellectual property, which is a shame. If he had gone down a more RISC-V like route, I wonder if we would see more real-world prototypes and actual use cases. This type of thing seems great for microprocessor workloads.
darksaints
It kinda sounds like it, though the article explicitly said it's not VLIW.
I've always felt like Itanium was a great idea that came too soon and was too poorly executed. It seemed like the majority of the commercial failure came down to friction from switching architecture and the inane pricing, rather than the merits of the architecture itself. Basically Intel being Intel.
bri3d
I disagree; Itanium was fundamentally flawed for general-purpose computing, and especially time-shared general-purpose computing. VLIW is not practical in time-sharing systems without completely rethinking the way cache works, and Itanium didn't really do that.
As soon as a system has variable instruction latency, VLIW completely stops working; the entire concept is predicated on the compiler knowing how many cycles each instruction will take to retire ahead of time. With memory access hierarchy and a nondeterministic workload, the system inherently cannot know how many cycles an instruction will take to retire because it doesn't know what tier of memory its data dependencies live in up front.
The advantage of out-of-order execution is that it dynamically adapts to data availability.
This is also why VLIW works well where data availability is _not_ dynamic, for example in DSP applications.
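To make "the compiler knowing the latency" concrete, here's a sketch against a made-up VLIW machine (hypothetical latencies, not IA-64 or any real ISA):

    #include <stdio.h>

    /* Hypothetical VLIW schedule for "a[i] * b[i] + c", where the compiler
       statically budgets 3 cycles for every load:

         cycle 0:  { load r1 <- a[i]  ;  load r2 <- b[i] }
         cycle 3:  { mul  r3 <- r1, r2 }   (only correct if both loads hit)
         cycle 4:  { add  r4 <- r3, c  }

       If a[i] misses in cache and actually takes 200 cycles, the bundle
       stream stalls (or, on an exposed-pipeline machine with no interlocks,
       reads a stale register). The static schedule has no way to adapt. */
    static int mac(const int *a, const int *b, int c, int i) {
        return a[i] * b[i] + c;
    }

    int main(void) {
        int a[1] = {6}, b[1] = {7};
        printf("%d\n", mac(a, b, 8, 0));  /* prints 50 */
        return 0;
    }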
As for this Electron thing, the linked article is too puffed to tell what it's actually doing. The first paragraph says something about "no caches" but the block diagram has a bunch of caches in it. It sort of sounds like an FPGA with bigger primitives (configurable instruction tiles rather than gates), which means that synchronization is going to continue to be the problem and I don't know how they'll solve for variable latency.
hawflakes
Not to detract from your point, but Itanium's design was meant to address code compatibility between generations. You could have code optimized for a wider chip run on a narrower chip because of the stop bits. The compiler still needs to know how to schedule to optimize for a specific microarchitecture, but the code would still run, albeit not as efficiently.
As an aside, I never looked into the perf numbers but having adjustable register windows while cool probably made for terrible context switching and/or spilling performance.
als0
> VLIW is not practical in time-sharing systems without completely rethinking the way cache works
Just curious as to how you would rethink the design of caches to solve this problem. Would you need a dedicated cache per execution context?
bobmcnamara
Itanic did exactly what it was supposed to do - kill off most of the RISCs.
markhahn
haha! very droll.
cmrdporcupine
It does feel like the world has changed a bit now that LLVM is ubiquitous and its intermediate representation is available for specialized purposes. Translation from IR to a VLIW plan should be easier now than it was with the compiler tech of the 90s.
But "this is a good idea just poorly executed" seems to be the perennial curse of VLIW, and how Itanium ended up shoved onto people in the first place.
mochomocha
On the other hand, Groq seems pretty successful.
rpiguy
The architecture diagram in the article resembles the approach Apple took in the design of their neural engine.
https://www.patentlyapple.com/2021/04/apple-reveals-a-multi-...
Typically these architectures are great for compute. How will it do on scalar tasks with a lot of branching? I doubt well.
variadix
Pretty interesting concept, though as other commenters have pointed out the efficiency gains likely break down once your program doesn’t fit onto the mesh all at once. Also this looks like it requires a “sufficiently smart compiler”, which isn’t a good sign either. The need to do routing etc. reminds me of the problems FPGAs have during place and route (effectively the minimum cut problem on a graph, i.e. NP), hopefully compilation doesn’t take as long as FPGA synthesis takes.
kyboren
> The need to do routing etc. reminds me of the problems FPGAs have during place and route (effectively the minimum cut problem on a graph, i.e. NP)
I'd like to take this opportunity to plug the FlowMap paper, which describes the polynomial-time delay-optimal FPGA LUT-mapping algorithm that cemented Jason Cong's 31337 reputation: https://limsk.ece.gatech.edu/book/papers/flowmap.pdf
Very few people even thought that optimal depth LUT mapping would be in P. Then, like manna from heaven, this paper dropped... It's well worth a read.
almostgotcaught
I don't see what this has to do with what you're responding to - tech mapping and routing are two completely different things, and routing is known to be NP-complete.
icandoit
I wondered if this was using interaction combinators like the vine programming language does.
I haven't read much that explains how they do it.
I have been very slowly trying to build a translation layer between starlark and vine as a proof of concept of massively parallel computing. If someone better qualified finds a better solution, the market is sure to have demand for you. A translation layer is bound to be cheaper than teaching devs to write in jax or triton or whatever comes next.