
Data Centers, Temperature, and Power

Python3267

This article was written for non-technical folks, unfortunately. I read the phrase below and nearly puked from the corpo speech.

> So, the methodology around temperature mitigation always starts at power reduction—which means that growth, IT efficiencies, right-sizing for your capacity...

jakedata

I have had high hopes for passive daytime radiative cooling since I read about it 10 years ago. Converting waste heat to an infrared wavelength that flies off into space day or night is apparently not that easy or cost effective right now.

https://www.asme.org/topics-resources/content/new-solar-ener...

https://www.skycoolsystems.com/

https://www.nature.com/articles/s41377-023-01119-0
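For a rough sense of why it isn't cost-effective yet, here is a back-of-the-envelope Stefan-Boltzmann estimate (a sketch with assumed surface and sky temperatures and emissivity, not measured figures): the achievable net flux is on the order of 100-150 W/m^2, so rejecting even 1 MW of IT load needs thousands of square metres of radiator.

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

    def net_radiative_flux(t_surface_k=300.0, t_sky_k=270.0, emissivity=0.9):
        # Idealized net flux from a radiator facing an effective sky temperature;
        # ignores convection, humidity, and atmospheric-window details.
        return emissivity * SIGMA * (t_surface_k ** 4 - t_sky_k ** 4)

    flux = net_radiative_flux()            # ~140 W/m^2 with these assumptions
    it_load_w = 1_000_000                  # a 1 MW hall, assumed for illustration
    print(f"net flux: {flux:.0f} W/m^2")
    print(f"panel area to reject 1 MW: {it_load_w / flux:,.0f} m^2")   # ~7,000 m^2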

jeffbee

Seems to be a mental mishmash. For one thing, they are taking it as given that temperature is relevant to device lifetime, but Google's FAST 2007 paper said "higher temperatures are not associated with higher failure rates".

Second weird thing is that it says cooling accounts for 40% of data center power usage, but this comes right after discussing PUE without contextualizing PUE with concrete numbers. State-of-the-art PUE is below 1.1. The article then links to a pretty flimsy source that actually says server loads are 40% ... this implies a PUE of 2.5. That could be true for global IT loads including small commercial server rooms, but it hardly seems relevant when discussing new builds of large facilities.
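The arithmetic behind that objection, as a quick sketch (using the 40% server-load figure and the 1.1 state-of-the-art PUE mentioned above):

    # PUE = total facility power / IT (server) power.
    def implied_pue(it_share_of_total):
        # Implied PUE if the IT load is this fraction of total facility power.
        return 1.0 / it_share_of_total

    print(implied_pue(0.40))      # 2.5 -- the "server loads are 40%" figure
    # At a state-of-the-art PUE of 1.1, cooling and other overhead is only ~9% of total:
    print(1.0 - 1.0 / 1.1)        # ~0.091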

Finally, it's irritating when these articles are grounded in equivalents of American homes. The fact is that a home just doesn't use a lot of energy, so it's a silly unit of measure. These figures should be based on something that actually uses energy, like cars or aircraft or something.

dijit

> Seems to be a mental mishmash. For one thing, they are taking it as given that temperature is relevant to device lifetime, but Google's FAST 2007 paper said "higher temperatures are not associated with higher failure rates".

Google have been wrong a couple of times, and this is one area where I think what they said (18 years ago, btw) has had some time for the rubber to meet the road.

Google also famously chose to disavow ECC as mandatory[0] but then quietly changed course[1].

In fact, even within the field of memory: higher temperatures cause more errors[2], and current leakage is more common at higher temperatures in dense lithographic electronics (memory controllers, CPUs)[3].

Regardless: thermal expansion and contraction will cause degradation of basically any material I can think of, so if you can utilise the machines 100% consistently and hold a steady temperature, then maybe the hardware doesn't age as aggressively as our desktop PCs that play games, assuming there's no leakage going on to crash things.

[0]: https://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf

[1]: https://news.ycombinator.com/item?id=14206811

[2]: https://dramsec.ethz.ch/papers/mathur-dramsec22.pdf

[3]: https://www.researchgate.net/publication/271300947_Analysis_...
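For a sense of how the temperature-vs-lifetime question is usually quantified in reliability work, here is a sketch of the Arrhenius acceleration-factor model (not taken from the cited papers; the 0.7 eV activation energy is an assumed, illustrative value):

    import math

    K_B_EV = 8.617e-5   # Boltzmann constant, eV/K

    def arrhenius_acceleration(t_low_c, t_high_c, ea_ev=0.7):
        # Ratio of failure rates between two operating temperatures under the
        # Arrhenius model; ea_ev is an assumed activation energy.
        t_low_k, t_high_k = t_low_c + 273.15, t_high_c + 273.15
        return math.exp((ea_ev / K_B_EV) * (1.0 / t_low_k - 1.0 / t_high_k))

    print(f"{arrhenius_acceleration(45, 55):.2f}x")   # ~2.2x faster ageing at 55 C vs 45 C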

jeffbee

I am not taking Google's result at face value, but the article shouldn't make assumptions without supporting evidence, either. ASHRAE used to say your datacenter should be 20°C-25°C, which, you know, makes a certain amount of sense coming from an organization that earns its money installing and repairing CRACs. Now they admit that 18°C-27°C is common and they allow for designs up to 45°C ambient. They are following the industry up.



leConbineatort

Cannot browse the website from France!?

remram

Works for me (from France)