
Reversible computing escapes the lab


52 comments

January 10, 2025

colanderman

Nice, these ideas have been around for a long time but never commercialized to my knowledge. I've done some experiments in this area with simulations and am currently designing some test circuitry to be fabbed via Tiny Tapeout.

Reversibility isn't actually necessary for most of the energy savings. It saves you an extra maybe 20% beyond what adiabatic techniques can do on their own. Reason being, the energy of the information itself pales in comparison to the resistive losses which dominate the losses in adiabatic circuits, and it's actually a (device-dependent) portion of these resistive losses which the reversible aspect helps to recover, not the energy of information itself.

I'm curious why Frank chose to go with a resonance-based power-clock, instead of a switched-capacitor design. In my experience the latter are nearly as efficient (losses are still dominated by resistive losses in the powered circuit itself), and are more flexible as they don't need to be tuned to the resonance of the device. (Not to mention they don't need an inductor.) My guess would be that, despite requiring an on-die inductor, the overall chip area required is much less than that of a switched-capacitor design. (You only need one circuit's worth of capacitance, vs. 3 or more for a switched design, which quadruples your die size....)

I'm actually somewhat skeptical of the 4000x claim though. Adiabatic circuits can typically only provide about a single order of magnitude power savings over traditional CMOS -- they still have resistive losses, they just follow a slightly different equation (f²RC²V², vs. fCV²). But RC and C are figures of merit for a given silicon process, and fRC (a dimensionless figure) is constrained by the operational principles of digital logic to the order of 0.1, which in turn constrains the power savings to that order of magnitude regardless of process. Where you can find excess savings though is simply by reducing operating frequency. Adiabatic circuits benefit more from this than traditional CMOS. Which is great if you're building something like a GPU which can trade clock frequency for core count.

mikepfrank

Hi, someone pointed me at your comment, so I thought I'd reply.

First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.

Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves you "an extra 20%" -- well, then now that's 99%. But, if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
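The arithmetic here is easy to check (a throwaway Python sketch of the percentages in the paragraph above, nothing more):

```python
# A circuit that saves a fraction s of the conventional energy dissipates
# (1 - s) of it, so its efficiency advantage over conventional CMOS is
# 1 / (1 - s).  Small extra savings near 100% mean large extra advantage.

def efficiency_gain(savings_fraction):
    """Factor by which energy per operation improves vs. conventional CMOS."""
    return 1.0 / (1.0 - savings_fraction)

quasi = efficiency_gain(0.79)   # quasi-adiabatic: dissipates 21%
fully = efficiency_gain(0.99)   # fully adiabatic: dissipates 1%

print(round(quasi, 2))          # 4.76  (~5x over conventional)
print(round(fully, 2))          # 100.0 (100x over conventional)
print(round(fully / quasi, 1))  # 21.0  (fully vs. quasi advantage)
```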

Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
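To get a rough feel for that last bound (my own illustration; the on/off ratios below are generic placeholders, not figures for any particular process):

```python
import math

# Per the argument above, the maximum adiabatic energy-savings factor is
# on the order of sqrt(I_on / I_off): slowing the transition reduces
# resistive loss but extends the window for leakage, and the optimum sits
# near the geometric mean of the two loss regimes.

def max_savings_factor(on_off_ratio):
    return math.sqrt(on_off_ratio)

for ratio in (1e4, 1e6, 1e8):  # representative device on/off ratios
    print(f"on/off {ratio:.0e} -> ~{max_savings_factor(ratio):,.0f}x")
```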

Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)

You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below -- but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.

https://www.sandia.gov/app/uploads/sites/210/2023/11/Comet23... https://www.youtube.com/watch?v=vALCJJs9Dtw

Happy to answer any questions.

colanderman

Thanks for the reply, was actually hoping you'd pop over here.

I don't think we actually disagree on anything. Yes, without reverse circuits you are limited to quasi-adiabatic operation. But, at least in the architectures I'm familiar with (mainly PFAL), most of the losses are unarguably resistive. As I understand PFAL, it's only when the operating voltage of a given gate drops below Vth that the (macro) information gets lost and reversibility provides benefit, which is only a fraction of the switching cycle. At least for PFAL the figure is somewhere in the 20% range IIRC. (I say "macro" because of course the true energy of information is much smaller than the amounts we're talking about.)

The "20%" in my comment I meant in the multiplicative sense, not additive. I.e. going from 79% savings to 83.2%, not 99%. (I realize that wasn't clear.)

What I find interesting is that reversibility isn't actually necessary for true adiabatic operation. All that matters is that the information about where charge needs to be recovered from can be derived somehow. This could come from information available elsewhere in the circuit, not necessarily from the subsequent computations run in reverse. (Thankfully, quantum non-duplication does not apply here!)

I agree that energy per operation is often more meaningful, BUT one must not lose sight of the lower bounds on clock speed imposed by a particular workload.

Ah, thanks for the insight into the resonator/switched-cap tradeoff. Yes, I know that switched-capacitor designs which are themselves adiabatic are a bit of a research topic. In my experience their losses aren't comparable to the resistive losses of the adiabatic circuitry itself, though. (I've done SPICE simulations using the sky130 process.)

mikepfrank

It's been a while since I looked at it, but I believe PFAL is one of the not-fully-adiabatic techniques that I have a lot of critiques of.

There have been studies showing that a truly, fully adiabatic technique in the sense I'm talking about (2LAL was the one they checked) does about 10x better than any of the other "adiabatic" techniques. In particular, 2LAL does a lot better than PFAL.

> reversibility isn't actually necessary

That isn't true in the sense of "reversible" that I use. Look at the structure of the word -- reverse-able. Able to be reversed. It isn't essential that the very same computation that computed some given data is actually applied in reverse, only that no information is obliviously discarded, implying that the computation always could be reversed. Unwanted information still needs to be decomputed, but in general, it's quite possible to de-compute garbage data using a different process than the reverse of the process that computed it. In fact, this is frequently done in practice in typical pipelined reversible logic styles. But they still count as reversible even though the forwards and reverse computations aren't identical. So, I think we agree here and it's just a question of terminology.
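The compute/copy/decompute pattern described here can be sketched in a few lines (a toy Python illustration of my own, built from XOR updates, which are their own inverse -- so in this toy the decompute pass happens to reuse the forward pass, though as the comment notes it needn't in general):

```python
# Toy reversible pipeline built only from in-place reversible updates
# (XOR-accumulate, which is self-inverse).  We compute a result into a
# scratch register, copy it out reversibly, then decompute the scratch
# value -- so no bit is ever obliviously erased.

def cnot(ctrl, regs, tgt):
    regs[tgt] ^= ctrl  # reversible: applying it twice undoes it

def fold_parity(bits, regs):
    for b in bits:           # fold the inputs into scratch register "g"
        cnot(b, regs, "g")

regs = {"g": 0, "out": 0}
inputs = [1, 0, 1, 1]

fold_parity(inputs, regs)    # forward: g now holds the parity (garbage-to-be)
cnot(regs["g"], regs, "out") # reversibly copy the result to "out"
fold_parity(inputs, regs)    # decompute: the same pass returns g to 0

print(regs)  # {'g': 0, 'out': 1} -- result kept, scratch restored
```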

Lower bounds on clock speed are indeed important; generally this arises in the form of maximum latency constraints. Fortunately, many workloads today (such as AI) are limited more by bandwidth/throughput than by latency.

I'd be interested to know if you can get energy savings factors on the order of 100x or 1000x with the capacitive switching techniques you're looking at. So far, I haven't seen that that's possible. Of course, we have a long way to go to prove out those kinds of numbers in practice using resonant charge transfer as well. Cheers...

itissid

Can one define the process an adiabatic circuit goes through the way one would, analogously, for the Carnot engine? The idea being to come up with a theoretical ceiling for the efficiency of such a circuit in terms of circuit parameters.

colanderman

Yes, a similar analysis is where the above expression f²RC²V² comes from.

Essentially -- (and I'm probably missing a factor of 2 or 3 somewhere as I'm on my phone and don't have reference materials) -- in an adiabatic circuit the unavoidable power loss for any individual transistor stems from current (I) flowing through that transistor's channel (a resistor R) on its way to and from another transistor's gate (a capacitor C). So that's I²R unavoidable power dissipation.

I must be sufficient to charge and then discharge the capacitor to/from the operating voltage (V) in the time of one cycle (1/f). So I = 2fCV. Substituting this gives 4f²RC²V².

Compare to traditional CMOS, wherein the gate capacitance C is charged through R from a voltage source V. It can be shown that this dissipates ½CV² of energy through the resistor in the process, and the capacitor is filled with an equal amount of energy. Discharging then dissipates this energy through the same resistor. Repeat this every cycle for a total power usage of fCV².

Divide these two figures and we find that adiabatic circuits use 4fRC times as much energy as traditional CMOS. However, f must be less than about 1/(5RC) for a CMOS circuit to function at all (else the capacitors don't charge sufficiently during a cycle), so this always works out to a power savings in favor of adiabatics. And notably, decreasing the f of an adiabatic circuit below the maximum permissible for CMOS on the same process increases the efficiency gain proportionally.

(N.B., I feel like I missed a factor of 2 somewhere as this analysis differs slightly from my memory. I'll return with corrections if I find an error.)
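The derivation can be sanity-checked numerically (a back-of-envelope Python script of my own; R and C are placeholder values, not figures from any real process):

```python
# Per-gate dynamic power from the derivation above:
#   adiabatic:    P_a = 4 f^2 R C^2 V^2
#   conventional: P_c = f C V^2
# Their ratio P_c / P_a = 1 / (4 f R C) is the adiabatic advantage,
# which grows in proportion as f is lowered.

R = 10e3    # effective channel resistance, ohms (placeholder)
C = 1e-15   # gate capacitance, farads (placeholder)
V = 1.0     # operating voltage, volts

def p_adiabatic(f):
    return 4 * f**2 * R * C**2 * V**2

def p_cmos(f):
    return f * C * V**2

f_max = 1 / (5 * R * C)  # rough upper bound on the CMOS clock, ~1/(5RC)
for f in (f_max, f_max / 10, f_max / 100):
    adv = p_cmos(f) / p_adiabatic(f)
    print(f"f = {f:.2e} Hz: adiabatic advantage {adv:.2f}x")
```

At the maximum CMOS-usable frequency the advantage is a modest 1.25x; each 10x reduction in clock multiplies it by 10, matching the proportionality claimed above.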

pfdietz

Maybe this would work better with superconducting electronics?

mikepfrank

There indeed has been research on reversible adiabatic logic in superconducting electronics. But superconducting electronics has a whole host of issues of its own, such as low density and a requirement for ultra-low temperatures.

When I was at Sandia we also had a project exploring ballistic reversible computation (as opposed to adiabatic) in superconducting electronics. We got as far as confirming to our satisfaction that it is possible, but this line of work is a lot farther from major commercial applications than the adiabatic CMOS work.

colanderman

Possibly, that's an interesting thought. The main benefit of adiabatics as I see them is that, all else being equal, a process improvement of the RC figure can be used to enable either an increase in operating frequency or a decrease in power usage (this is reflected as the additional factor of fRC in the power equation). With traditional CMOS, this only can benefit operating frequency -- power usage is independent of the RC product per se. Superconduction (or near-superconduction) is essentially a huge improvement in RC which couldn't be realized as an increase in operating frequency due to speed-of-light limitations, so adiabatics would see an outsize benefit in that case.

PaulHoule

Notably the physical limit is

https://en.wikipedia.org/wiki/Landauer%27s_principle

it doesn't necessarily take any energy at all to process information, but it does take roughly kT worth of energy (kT ln 2, to be precise) to erase a bit of information. It's related to

https://en.wikipedia.org/wiki/Maxwell%27s_demon

as, to complete cycles, the demon has to clear its memory.
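Plugging in the constants (simple arithmetic, assuming room temperature):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # room temperature, kelvin

e_bit = k_B * T * math.log(2)  # Landauer limit: min. energy per bit erased
print(f"{e_bit:.3e} J")        # ~2.87e-21 J per erased bit
```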

Y_Y

Does it not take energy to process information? Can any computable function be computed with arbitrarily low energy input/entropy increase?

colanderman

No, and yes, so long as you don't delete information.

Think of a marble-based computer, whose inner workings are frictionless and massless. The marbles roll freely without losing energy unless they are forced to stop somehow, but computation is nonetheless performed.

Y_Y

I don't know how to compute with marbles without mass and stopping. Marble computers I've seen rely on gravity and friction, though I'd love to see one that didn't.

entaloneralie

Henry G. Baker wrote this paper titled "The Thermodynamics of Garbage Collection" in the 90s about linear logic, stack machines, reversibility and the cost of erasing information:

https://wiki.xxiivv.com/docs/baker_thermodynamics.html

A subset of FRACTRAN programs are reversible, and I would love to see rewriting computers as a potential avenue for reversible circuit building (similar to the STARAN CPU):

https://wiki.xxiivv.com/site/fractran.html#reversibility

siver_john

This is really cool; I never expected to see reversible computation made in electrical systems. I learned about it in undergrad, taking a course by Bruce MacLennan*, though it was more applied to "billiard ball" or quantum computing. It was such a cool class though.

*Seems like he finally published the textbook he was working on when teaching the class: https://www.amazon.com/dp/B0BYR86GP7?ref_=pe_3052080_3975148...

stevage

Wow. This whole logic sounds like something really harebrained from a Dr Who episode: "It takes energy to destroy information. Therefore if you don't destroy information, it doesn't take energy!" - sounds completely illogical.

I honestly don't understand from the article how you "recover energy". Yet I have no reason to disbelieve it.

kibwen

Someone else here compared it to regenerative braking in cars, which is what made it click for me. If you spend energy to accelerate, then recapture that energy while decelerating, then you can manage to transport yourself while your net energy expenditure is zero (other than all that pesky friction). On the other hand, if you spend energy to accelerate, then shed all that energy via heat from your brake pads, then you need to expend new energy to accelerate next time.

amelius

> The main way to reduce unnecessary heat generation in transistor use—to operate them adiabatically—is to ramp the control voltage slowly instead of jumping it up or down abruptly.

But if you change the gate voltage slowly, then the transistor will be for a longer period in the resistive region where it dissipates energy. Shouldn't you go between the OFF and ON states as quickly as possible?

colanderman

The trick is not to have a voltage across the channel while it's transitioning states. For this reason, adiabatic circuits are typically "phased" such that any given adiabatic logic gate is either having its gates charged or discharged (by the previous logic gate), or current is passing through its channels to charge/discharge the next logic gate.

amelius

Interesting, thanks!

pama

The ideas are neat, and both Landauer and Bennett did some great work and left a powerful legacy. But the energetic limits we are talking about are not yet relevant in modern computers. The amount of excess thermal energy for performing 10^26 erasures associated with some computation (of, say, an LLM that would be too powerful for the current presidential orders) would only be about 0.1 kWh, so 10 minutes of a single modern GPU. There are other advantages to reversibility, of course, and maybe one day even that tiny amount of energy savings will matter.
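The 0.1 kWh figure checks out against the Landauer bound (quick Python arithmetic, assuming T ≈ 300 K):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # kelvin
n_erasures = 1e26   # bit erasures for the hypothetical workload

energy_J = n_erasures * k_B * T * math.log(2)  # Landauer minimum
energy_kWh = energy_J / 3.6e6                  # 1 kWh = 3.6e6 J
print(f"{energy_kWh:.3f} kWh")                 # ~0.080 kWh
```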

leoc

DonHopkins

As well as Tommaso Toffoli, Norman Margolus, Tom Knight, Richard Feynman, and Charles Bennett:

Reversible Computing, Tommaso Toffoli:

https://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TM-1...

>Abstract. The theory of reversible computing is based on invertible primitives and composition rules that preserve invertibility. With these constraints, one can still satisfactorily deal with both functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underly any concrete implementation of such systems. According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation.

A Scalable Reversible Computer in Silicon:

https://www.researchgate.net/publication/2507539_A_Scalable_...

Reversible computing:

https://web.eecs.utk.edu/~bmaclenn/Classes/494-594-UC-F17/ha...

>In the 1970s, Ed Fredkin, Tommaso Toffoli, and others at MIT formed the Information Mechanics group to study the physics of information. As we will see, Fredkin and Toffoli described computation with idealized, perfectly elastic balls reflecting off barriers. The balls have minimum dissipation and are propelled by (conserved) momentum. The model is unrealistic but illustrates many ideas of reversible computing. Later we will look at it briefly (Sec. C.7).

>They also suggested a more realistic implementation involving “charge packets bouncing around along inductive paths between capacitors.” Richard Feynman (Caltech) had been interacting with Information Mechanics group, and developed “a full quantum model of a serial reversible computer” (Feynman, 1986).

>Charles Bennett (1973) (IBM) first showed how any computation could be embedded in an equivalent reversible computation. Rather than discarding information (and hence dissipating energy), it keeps it around so it can later “decompute” it back to its initial state. This was a theoretical proof based on Turing machines, and did not address the issue of physical implementation. [...]

>How universal is the Toffoli gate for classical reversible computing:

https://quantumcomputing.stackexchange.com/questions/21064/h...

EncomLab

Calling the addition of an energy storage device into a transistor "reverse computing" is like calling a hybrid car using regenerative braking "reverse driving".

It's a very interesting concept -- best discussed over pints at the pub on a Sunday afternoon, along with over-unity devices and the sad lack of adoption of bubble memory.

IIAOPSW

Well actually, "reversible driving" is perfectly apt in the sense of acceleration being a reversible process. It means that in theory the net energy needed to drive anywhere is zero because all the energy spent on acceleration is gained back on braking. Yes I know in practice there's always friction loss, but the point is there isn't a theoretical minimum amount of friction that has to be there. In principle a car with reversible driving can get anywhere with asymptotically close to zero energy spent.

Put another way, there is no way around the fact that a "non-reversible car" has to have friction loss because the brakes work on friction. But there is no theoretical limit to how far you can reduce friction in reversible driving.

nine_k

Cars specifically dissipate energy on deformation of the tires; this loss is irreversible at any speed, even if all the bearings have effectively zero losses (e.g. using magnetic levitation).

A train spends much less on that because the rails and the wheels are very firm. A maglev train likely recuperates nearly 100% of its kinetic energy during deceleration, less the aerodynamic losses; it's like a superconducting reversible circuit.

immibis

Actually, a non-reversible car also has no lower energy limit, as long as you drive on a flat surface (same for a reversible one) and can get to the destination arbitrarily slowly.

An ideal reversible computer also works arbitrarily slowly. To make it go faster, you need to put energy in. You can make it go arbitrarily slowly with arbitrarily little energy, just like a non-reversible car.

EncomLab

This is glorious.

colanderman

The reverse computing is independent of the energy storage mechanism. It's used to "remember" how to route the energy for recovery.

psd1

A pub in Cambridge, perhaps! I doubt you'd overhear such talk in some Aldershot dive.

The Falling Edge, maybe? The Doped Wafer?

082349872349872

The Flipped Bit? The Reversed Desrevereht?

(I once read a fiction story about someone who, instead of having perfect pitch, had perfect winding number: he couldn't get to sleep before returning to zero, so it took him some time to realise that when other people talked about "unwinding" at the end of the day, they didn't mean it literally)

perching_aix

Sounds like a good time :)

yalogin

The concept completely flummoxed me, but how does this play with quantum computers? That's the direction we are going, isn't it?

fallingfrog

Quantum computations have to be reversible, because you have to collapse the wave function and take a measurement to throw away any bits of data. You can accumulate junk bits as long as they remain in a superposition. But at some point you have to take a measurement. So, very much related.

EncomLab

The minuscule amount of energy retained from the "reverse computation" will be absolutely demolished by the first DRAM refresh.

fintler

I doubt it would use DRAM. Maybe some sort of MRAM/FeRAM would be a better fit. Or maybe a tiny amount of memory (e.g. Josephson junction) in a quantum circuit at some point in the future.

colanderman

SRAM is actually very architecturally similar to some adiabatic circuit topologies.

DonHopkins

Reversible Computing (2016) [video] (youtube.com)

https://news.ycombinator.com/item?id=16007128

https://www.youtube.com/watch?v=rVmZTGeIwnc

DonHopkins on Dec 26, 2017:

Billiard Ball cellular automata, proposed and studied by Edward Fredkin and Tommaso Toffoli, are one interesting type of reversible computer. The Ising spin model of ferromagnetism is another reversible cellular automata technique. https://en.wikipedia.org/wiki/Billiard-ball_computer

https://en.wikipedia.org/wiki/Reversible_cellular_automaton

https://en.wikipedia.org/wiki/Ising_model

If billiard balls aren't creepy enough for you, live soldier crabs of the species Mictyris guinotae can be used in place of the billiard balls.

https://www.newscientist.com/blogs/onepercent/2012/04/resear...

https://www.wired.com/2012/04/soldier-crabs/

http://www.complex-systems.com/abstracts/v20_i02_a02.html

Robust Soldier Crab Ball Gate

Yukio-Pegio Gunji, Yuta Nishiyama. Department of Earth and Planetary Sciences, Kobe University, Kobe 657-8501, Japan.

Andrew Adamatzky. Unconventional Computing Centre. University of the West of England, Bristol, United Kingdom.

Abstract

Soldier crabs Mictyris guinotae exhibit pronounced swarming behavior. Swarms of the crabs are tolerant of perturbations. In computer models and laboratory experiments we demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment.