Future Chips Will Be Hotter Than Ever
93 comments
April 16, 2025
trehalose
> In a frontside design, the silicon substrate can be as thick as 750 micrometers. Because silicon conducts heat well, this relatively bulky layer helps control hot spots by spreading heat from the transistors laterally. Adding backside technologies, however, requires thinning the substrate to about 1 mm to provide access to the transistors from the back.
This is a typo here, right? 1 mm is thicker, not thinner, than 750 micrometers. I assume 1 µm was meant?
enragedcacti
I think you're right that 1µm was meant given the orders of magnitude in other sources e.g. 200µm -> 0.3µm in this white paper:
https://www.cadence.com/en_US/home/resources/white-papers/th...
jackyinger
Wafers on some semiconductor processes are 0.3 m in diameter. You could not practically handle a 1 µm-thick wafer 0.3 m in diameter without shattering it. 0.75 mm is a reasonable overall wafer thickness.
Workaccount2
Who's gonna pull the trigger on beryllium oxide mounting packages first?
It's the holy grail of having thermal conductivity somewhere between aluminum and copper, while being as electrically insulating as ceramic. You can put the silicon die directly on it.
Problem is that the dust from it is terrifyingly toxic, but in its finished form it's "safe to handle".
mppm
> Who's gonna pull the trigger on beryllium oxide mounting packages first?
Nobody, presumably :)
Why mess with BeO when there is AlN, with higher thermal conductivity, no supply limitations and no toxicity?
Edit: I've just checked, practically available AlN substrates still seem to lag behind BeO in terms of thermal conductivity.
mjevans
https://en.wikipedia.org/wiki/Aluminium_nitride For anyone else who wasn't familiar with the compound.
""" Aluminium nitride (AlN) is a solid nitride of aluminium. It has a high thermal conductivity of up to 321 W/(m·K)[5] and is an electrical insulator. Its wurtzite phase (w-AlN) has a band gap of ~6 eV at room temperature and has a potential application in optoelectronics operating at deep ultraviolet frequencies.
...
Manufacture
AlN is synthesized by the carbothermal reduction of aluminium oxide in the presence of gaseous nitrogen or ammonia or by direct nitridation of aluminium.[22] The use of sintering aids, such as Y2O3 or CaO, and hot pressing is required to produce a dense technical-grade material.[citation needed]
Applications
Epitaxially grown thin film crystalline aluminium nitride is used for surface acoustic wave sensors (SAWs) deposited on silicon wafers because of AlN's piezoelectric properties. Recent advancements in material science have permitted the deposition of piezoelectric AlN films on polymeric substrates, thus enabling the development of flexible SAW devices.[23] One application is an RF filter, widely used in mobile phones,[24] which is called a thin-film bulk acoustic resonator (FBAR). This is a MEMS device that uses aluminium nitride sandwiched between two metal layers.[25] """
Speculation: its present use suggests that at commercially viable quantities it might be challenging to use as a thermal interface compound. I've also never previously considered the capacitive properties of packaging components, and realize of course that's required. Use of AlN as a heat conductor is so far outside of my expertise...
Could a materials expert elaborate how viable / expensive this compound is for the rest of us?
mppm
I'm not much of an expert, but maybe this can be useful: AlN is a somewhat widely used insulating substrate that is chosen where sapphire is insufficient (~40 W/mK), but BeO (~300 W/mK) is too expensive or toxic. The intrinsic conductivity of single-crystal AlN is very high (~320 W/mK), but the material is extremely difficult to grow into large single crystals, so sintered substrates are used instead. This reduces thermal conductivity to 170-230 W/mK depending on grade. Can't comment on pricing though.
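Back-of-the-envelope, the conductivity gap matters less than you might expect once the substrate is thin. A minimal 1-D conduction sketch; the power, die area, and thickness below are made-up illustration values, not figures from anywhere in this thread:

    # 1-D conduction temperature drop across a substrate: dT = P*t/(k*A).
    # P, A, and t are assumed illustration values.
    P = 100.0   # watts conducted through the substrate
    A = 1e-4    # 1 cm^2 die footprint, in m^2
    t = 0.5e-3  # 0.5 mm substrate thickness, in m

    for name, k in [("sapphire", 40.0), ("sintered AlN", 200.0), ("BeO", 300.0)]:
        dT = P * t / (k * A)  # temperature drop in kelvin
        print(f"{name:12s} k = {k:3.0f} W/(m*K) -> dT = {dT:4.1f} K")

With those numbers, sintered AlN lands within about a kelvin of BeO, while sapphire is the clear outlier.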
wiml
I think diamond is even more thermally conductive than either. A quick google finds a number of companies working on silicon-on-diamond.
adrian_b
Most packages with beryllium oxide have been abandoned long ago, beryllia being replaced with aluminum nitride.
Because aluminum nitride is not as good as beryllia, packages with beryllia have survived for some special applications, like military, aerospace or transistors for high-power radio transmitters.
Those packages are not dangerous, unless someone attempts to grind them, but their high price (caused by the difficult manufacturing techniques required to avoid health risks, and also by the rarity of beryllium) discourages their use in any other domains.
pitaj
> Problem is that the dust from it is terrifyingly toxic, but in its finished form it's "safe to handle".
Doesn't that mean it would be problematic for electronics recycling?
pelagicAustral
I don't think toxicity levels of compounds used in electronics have ever been a stopper for furthering humanity.
catlikesshrimp
I know it is hyperbole. The first things I thought of were cadmium, mercury, lead, and CFCs. I was slightly annoyed about the Cd and Hg.
ulrikrasmussen
Or getting berylliosis from putting a drill through your electronic device before throwing it out
giantg2
Won't you have conductivity issues if the oxide layer is damaged?
mjevans
The article mentions backside (underside) power distribution, capacitors to help regulate voltage (thus allowing tighter tolerances and lower voltage / operating power), voltage regulation under the chip, and finally dual-layer stacking with the above as potential avenues to spread heat dissipation.
I can't help but wonder, where exactly is that heat supposed to go on the underside of the chip? Modern CPUs practically float atop a bed of nails.
berbec
A second heatsink mounted to the back of the chip? Maybe socket the chip in such a way that the back touches a copper plate attached to some heatpipes? Plenty of options.
BizarroLand
I mean, there's no real reason a chip has to be a wafer.
A toroidal shape would allow more interconnects to be interspaced throughout the design as well as more heat-transfer points alongside the data transfer interconnects.
Something like chiplet design where each logical section is a complete core or even an SOC with a robust interconnect to the next and previous section.
If that were feasible, you could build it onto a hollow tube structure so that heat could be piped out from all sides once you sandwich the chip in a wraparound cooler.
I guess the idea is more scifi than anything, though. I doubt anyone other than ARM or RISC-V would ever even consider the idea until some other competitor proves the value.
mikewarot
We could also explore the idea that Von Neumann's architecture isn't the best choice for the future. Having trillions of transistors just waiting their turn to hand off data as fast as possible doesn't seem sane to me.
esseph
What's your solution then?
mikewarot
Start with an FPGA; they're optimized for performance, but too optimized, and very hard to program.
Rip out all the special purpose bits that make it non-uniform, and thus hard to route.
Rip out all of the long lines and switching fabric that optimizes for delays, and replace it all with only short lines to the neighboring cells. This greatly reduces switching energy.
Also have the data needed for every compute step already loaded into the cells, eliminating the memory/compute bottleneck.
Then add a latch on every cell, so that you can eliminate race conditions, and the need to worry about timing down to the picosecond.
This results in a uniform grid of look-up tables (LUTs) that get clocked in two phases, like the colors of a chessboard. Each cell thus has stable inputs, as they all come from the other phase, which is latched.
I call it BitGrid.
I'd give it a 50/50 chance of working out in the real world. If it does, it'll mean cheap PetaFlops for everyone.
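For anyone who wants to play with the idea, here is a toy simulator of a two-phase LUT grid as described above. It's a hypothetical sketch of the concept, not the actual BitGrid implementation; all names and the random truth tables are made up:

    # Toy two-phase LUT grid: a uniform array of 4-input look-up tables,
    # latched and clocked like the two colors of a chessboard.
    import random

    N = 8  # grid side length

    # Each cell is a 16-entry truth table over its four neighbors' bits.
    luts = [[[random.randint(0, 1) for _ in range(16)] for _ in range(N)]
            for _ in range(N)]
    state = [[0] * N for _ in range(N)]  # latched output of every cell

    def step(phase):
        # Update only cells whose chessboard color matches `phase` (0 or 1).
        # Every input comes from the other color, latched on the previous
        # half-clock, so there are no race conditions or picosecond timing
        # budgets to analyze.
        nxt = [row[:] for row in state]
        for y in range(N):
            for x in range(N):
                if (x + y) % 2 != phase:
                    continue
                n = state[(y - 1) % N][x]  # four nearest neighbors,
                s = state[(y + 1) % N][x]  # wrapping at the edges
                w = state[y][(x - 1) % N]
                e = state[y][(x + 1) % N]
                nxt[y][x] = luts[y][x][n << 3 | s << 2 | w << 1 | e]
        state[:] = nxt

    for _ in range(5):  # five full clocks, two phases each
        step(0)
        step(1)

Note how every wire in the sketch is neighbor-to-neighbor only; that's where the claimed switching-energy savings would come from.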
7bit
You should be working for Intel!
UltraSane
Programming for anything other than the Von Neumann architecture is very hard.
Legend2440
Generally true.
But neural networks are non-Von Neumann, and we 'program' them using backprop. This can also be applied to cellular automata.
pfdietz
One game that can be played is to use isotopically pure Si-28 in place of natural silicon. The thermal conductivity of Si-28 is 10% higher than natural Si at room temperature (but 8x higher at 26 K).
chasil
How difficult is the purification process? Is it as difficult as uranium hexafluoride gas?
Yes, gas centrifuge appears to be a leading method.
'The purification starts with “simple” isotopic purification of silicon. The major breakthrough was converting this Si to silane (SiH4), which is then further refined to remove other impurities. The ultra-pure silane can then be fed into a standard epitaxy machine for deposition onto a 300-mm wafer.'
https://www.eejournal.com/article/silicon-purification-for-q...
rbanffy
Doesn’t silane like catching fire when it sees an oxygen molecule? The other day I heard about it being used as rocket fuel for lunar ISRU applications.
A rocket and a sandblaster at the same time.
philipkglass
This is no worse than before. All electronic grade silicon is already produced starting from silane or trichlorosilane, and both are about equally hazardous to handle. See this overview of producing purified silicon:
"Chemistry of the Main Group Elements - 7.10: Semiconductor Grade Silicon"
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/...
HappyPanacea
How much does it cost to manufacture? Are there any other benefits from using isotopically pure Si-28? Are there any other isotopes of common thermally conductive materials that are more conductive?
pfdietz
The point of improving the thermal conductivity of silicon is that silicon is what chips are made of instead of, say, diamond.
Of course cost would have to be acceptable.
HappyPanacea
I was thinking more about isotopes of copper than carbon but I can't find data about thermal conductivity of isotopically enriched copper.
pfdietz
I understand isotopically pure Si-28 may be preferred for quantum computing devices. The Si-28 has no spin or magnetic moment, reducing the rate of decoherence of certain implementations of qubits.
https://spectrum.ieee.org/silicon-quantum-computing-purified...
ksec
With AI, both GPUs and CPUs are pushed to the absolute limit, and we shall be putting 750 W to 1000 W per unit with liquid cooling in datacenters within the next 5 to 8 years.
I wonder if we can actually use that heat for something useful.
itishappy
It's going to be at too low a temperature for power production, but district heating should be viable!
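The Carnot limit makes the power-production half of that concrete. Both temperatures below are assumptions for illustration, not measured figures:

    # Carnot ceiling on converting low-grade waste heat back into power.
    T_hot = 60 + 273.15   # coolant return temperature, K (assumed)
    T_cold = 25 + 273.15  # ambient heat sink, K (assumed)

    eta_max = 1 - T_cold / T_hot
    print(f"Carnot ceiling: {eta_max:.1%}")  # ~10.5%, before real losses

A theoretical ceiling around 10% at those temperatures, before any real-world losses, which is why district heating is the more sensible use.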
formerly_proven
The mainstream data center GPUs are already at 700 W and Blackwell sits at ~1 kW.
esseph
We are looking at 600kW per rack, and liquid cooling is already deployed in many places.
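At that density the numbers get silly fast. A quick multiplication, with the rack count per aisle being an assumption:

    # Aisle-level power at the rack density quoted above.
    kw_per_rack = 600
    racks_per_aisle = 20  # assumed
    print(f"{kw_per_rack * racks_per_aisle / 1000:.0f} MW per aisle")  # 12 MW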
AnimalMuppet
So, one power plant per aisle of a data center?
datadrivenangel
Attempts to use the waste heat for anything in a data center are likely very counterproductive to actually cooling the chips.
FirmwareBurner
Pentium 4, GeForce FX 5800, PS3, Xbox 360, Nintendo Wii, MacBook 20??-2019: "First time?"
the__alchemist
This checks out. If y'all haven't specced a modern PC lately: coolers for the GPU and CPU are huge, watercooling is now officially recommended for new CPUs, and cases are ventilated on all sides. Disk bays are moved out of the main chamber to improve airflow. Fans everywhere. Front-panel surface area is completely covered in fans.
arcanemachiner
> watercooling is now officially recommended for new CPUs
First I'm hearing of this. Last I checked, air coolers had basically reached parity with any lower-end water cooled setup.
giantg2
I built a PC last year and saw that a bunch of the CPUs recommended water cooling. There were a few high-end air coolers that were compatible. I went with an AIO water cooler. It was cheap and easy. It should give temperature control as good as or better than the air coolers that are 5x more expensive.
My guess is manufacturers don't want to tell people they should air cool if it requires listing specific models. It's easy to just say they recommend water cooling since basically all water coolers will provide adequate performance.
m463
The Noctua CPU coolers are as good as liquid cooling, and quieter, because of the pump noise.
That said, I think liquid cooling has reached critical mass. AIOs are commonplace.
I think it would be (uh) cool to have an extra-huge external reservoir and fan (think motorcycle or car radiator, plus maybe a tank) that could be nearly silent and cool the CPU and GPU.
the__alchemist
I was surprised too, but that's from the AMD label!
xattt
You are behind the times. The latest and fastest PowerMac that Apple released so far* is water-cooled.
*Technically the truth
tempodox
Will there be an official “cleared for frying eggs” badge? We'll have to do something with all that heat.
TMWNN
mfw you forget AMD Thunderbird
Sometimes the solution is worse than the problem. My favorite example is the TRS-80 Model II and its descendants, with the combination of the fan and disk drives so loud that users experience physical discomfort. <https://archive.org/details/80-microcomputing-magazine-1983-...>
FirmwareBurner
Modern computers should come with built in piezo, haptic and rumble motors that can emulate HDD, FDD and CD-ROM sounds whenever you start a game or app. Change my mind.
- Inner voice: "You don't miss the old PC noises, you just miss those times".
- Shut up!
TMWNN
But this only simulates keyboard and mouse click sounds. In any case, you wrote "whenever you start a game or app" (my emphasis). The Model II's fan and drive noises are 100% present from start to finish, with the combination enough to drive users insane (or, at least, not want to use the $5-10,000 computer).
bitwize
The Model II was a loud beast. Its floppy drive drew directly from mains power, not a DC rail off the power supply, and spun all the time. The heads engaged via a solenoid that was so powerful it made a loud "thunk" sound and actually changed the size of the display on the built-in CRT.
The Model 12 and 16 improved on the design, sporting Tandon "Thinline" 8" drives that ran on DC and spun down when not in use, leaving fan noise that was quite tolerable.
gkhartman
The hardest-to-cool CPU I've ever owned was an AMD Athlon 3200+. I remember moving to a P4, and life got a lot easier. It still ran very hot, but it could do so without frequent crashing. This was before the giant coolers we have today were commonplace. I was far too afraid of water cooling back then.
rayiner
The most power hungry P4 didn’t top 115W.
adrian_b
The 90 nm Prescott Pentium 4 was much more power hungry than the previous 130 nm Northwood Pentium 4.
Even worse than the TDP was the fact that the 90 nm Pentium 4 had huge leakage current, so its idle power consumption was about half of the maximum power consumption, e.g. in the range 50 to 60 W for the CPU alone.
Moreover, at that time (2004) the cooler makers were not prepared for such a jump in the idle power consumption and maximum power consumption, so the only coolers available for Pentium 4 were extremely noisy when used with 90 nm Pentium 4 CPUs.
I remember that at the company where I worked, we had a great number of older Pentium 4 CPUs, which were acceptable, and then we got a few upgrades with the new Prescott Pentium 4. The noise, even when the computers were completely idle, was tremendous. We could not stand it, so we returned the computers to the vendor.
42lux
The die was much smaller…
formerly_proven
Die size: 135mm²
A current AMD CCD is ~70 mm² and can drop around 120 W or so on that area. E.g. the 9700X has one CCD and up to a 142 W PPT; 20 W goes to the IOD, ~120 W into the CCD.
edit: (1) this account/IP-range is limited to a handful of comments per day so I cannot reply directly, having exhausted my allotment of HN comments for today (2) I do not understand what you take offense at, because I did not "change [my] original argument" - you claimed, a P4 die is much smaller, I gave a counter example, and made the example more specific in response to your comment (by adding the "E.g. ..." bit with an example of a SKU and how the power would approximately split up).
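For concreteness, the power densities those figures imply. This is back-of-the-envelope only, using the wattages and die areas quoted in this thread:

    # Rough power-density comparison using the figures quoted above.
    chips = [
        ("Pentium 4 (Prescott)", 115.0, 135.0),  # watts, die area in mm^2
        ("Zen 5 CCD (9700X)",    120.0,  70.0),
    ]
    for name, watts, area in chips:
        print(f"{name:22s} {watts / area:.2f} W/mm^2")

That's roughly 0.85 vs 1.7 W/mm²: about twice the Prescott's power density on half the area.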
FirmwareBurner
Which was huge in the era when CPUs didn't underclock themselves at idle to save power and coolers looked like this: https://www.newegg.com/cooler-master-air-cooler-series-a73/p...
Some coolers today still look like that but they're on chips drawing 35W or so while idling at <2W.
pcwalton
I mean, if what you want is P4-class performance, the modern semiconductor industry is excellent at delivering that with low TDP. An Apple A18 Pro [1] gives you over 7x the single thread performance of a Pentium 4 Extreme Edition [2] at 8 W TDP, compared to 115 W for the latter.
[1]: https://www.cpubenchmark.net/cpu.php?cpu=Apple+A18+Pro&id=62...
[2]: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Pentium+4+3.7...
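The performance-per-watt ratio those figures imply, as a rough sketch using only the numbers cited above:

    # Perf-per-watt implied by the benchmark and TDP figures cited above.
    speedup = 7.0          # single-thread performance, A18 Pro vs P4 EE
    power_ratio = 115 / 8  # P4 EE TDP / A18 Pro TDP
    print(f"~{speedup * power_ratio:.0f}x performance per watt")  # ~100x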
Havoc
Is there a reason we can’t put heat pipes directly into chips? Or underneath
amelius
Speaking of dissipation, how is the progress in reversible computing going?
onewheeltom
Isn’t heat just wasted energy?