Written by Herb Villa, Snr. Applications Engineer, Rittal North America LLC
Downtime is the ultimate enemy for data centres. One frightening weapon used by this enemy is heat. As the demands in data centres grow, the instances of heat taking down a smooth-running facility only continue to increase. In other words, the heat is on.
The battle continues as various IT components consume more power, with CPU thermal design power (the maximum heat a chip is designed to generate and must dissipate) expected to push 400 watts soon. As memory demands grow to terabytes per server and modern IT workloads skyrocket, cooling server hardware with air alone is becoming impossible. Enter liquid cooling.
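To see why air runs out of headroom, consider the steady-state energy balance Q = ṁ·cp·ΔT. The sketch below uses illustrative numbers (a 400 W chip and an assumed 10 K coolant temperature rise; approximate room-temperature fluid properties) to compare the flow of air versus water needed to carry the same heat away:

```python
# Rough energy-balance sketch (illustrative numbers, not vendor data):
# coolant flow needed to remove 400 W of chip heat at a 10 K temperature rise.

def mass_flow_kg_s(heat_w: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow required by the steady-state energy balance Q = m_dot * cp * dT."""
    return heat_w / (cp_j_per_kg_k * delta_t_k)

HEAT_W = 400.0   # assumed CPU thermal design power
DELTA_T = 10.0   # assumed coolant temperature rise (K)

# Approximate fluid properties near room temperature
AIR_CP, AIR_RHO = 1005.0, 1.2        # J/(kg*K), kg/m^3
WATER_CP, WATER_RHO = 4186.0, 998.0  # J/(kg*K), kg/m^3

air_m3_s = mass_flow_kg_s(HEAT_W, AIR_CP, DELTA_T) / AIR_RHO
water_l_min = mass_flow_kg_s(HEAT_W, WATER_CP, DELTA_T) / WATER_RHO * 1000 * 60

print(f"Air:   {air_m3_s * 2118.88:.0f} CFM")   # cubic feet per minute
print(f"Water: {water_l_min:.2f} L/min")
```

Roughly 70 CFM of air versus about half a litre of water per minute for one chip: water's far higher volumetric heat capacity is the whole story, and it is why per-chip liquid loops scale where airflow does not.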
Liquid cooling is nothing new. Many IT professionals in traditional data centres have implemented it in some form for at least a decade. So, why is it becoming such a favourable option now? And, at the risk of opening an even greater debate, just what exactly are we talking about when we use the very broad term liquid cooling?
The conversation that follows uses the more commonly (and recently) accepted definition of liquid cooling: heat removal, using a liquid medium, at the chip, chassis or component level. This definition does not include row/room-based solutions that use air/water or air/refrigerant heat exchange. Even though they use a 'liquid', they are separate systems, and separate conversations.
Being 2+ years into a global pandemic has created the new normal. And, like any industry, IT professionals have pivoted, learned and adapted to changing business trends.
Trends, including within the data centre space, force change in one way or another. What is clear at this point? Traditional methods (removing heat by mixing cold air with hot air to reach an appropriate temperature) are becoming extinct.
Real-world experience makes this clear: air-based cooling infrastructure has numerous problems: rising energy costs, high maintenance costs, space constraints, and hardware vulnerability to pollutants. Most important of all, the fundamental limitations of air-based cooling cannot keep pace with high-density data centre demands for three specific reasons:
With all the benefits being so obvious, why isn't liquid cooling everywhere by now? Well, there are disadvantages to any system, and liquid cooling is no different.
Adapting to a new system is a big one. Many enterprise and commercial users operate in data centres built in the past 10-15 years, and they cannot simply drop their current cooling methods and instantly modify facilities for liquid cooling systems. From a sustainability, performance (CPUs, GPUs, etc.) and total cost of ownership outlook, liquid cooling infrastructure is clearly the wise choice. It is simply difficult to scale out this new technology.
Fear and unfamiliarity form another barrier. Will the fear of a leak bringing down an entire rack (or several racks) scare off otherwise highly experienced IT professionals? Are plumbers now going to play a significant role in data centre design and planning? Do the non-water liquids in some liquid cooling systems evaporate rather than damage components?
Questions and concerns abound, making gradual adaptation the key. Changing to liquid cooling throughout an entire data centre will likely take time, possibly years. Existing data centres must retrofit to liquid cooling, while new facilities can specify liquid cooling solutions for high-demand data centres from day one.
Take a look at this resource for advice specifically for colocation centres.
Speaking of retrofitting to liquid cooling, it is possible to use a hybrid system in which air-moving fans assist the liquid cooling (which does the majority of the work). Not only does the air path serve as a backup in case of failure, it can also provide peace of mind as data centre facility managers begin to adopt liquid cooling technology.
The term liquid cooling covers multiple methods of heat removal and climate control. In all of them, either chilled water or a refrigerant serves as the primary heat removal medium at the component level. So, what are the options for full liquid cooling systems?
Immersion cooling uses vats of dielectric fluid, in either single-phase or two-phase immersion, to cool equipment. The advantage is that the system is built from the ground up as part of the data centre's design. There is no adapting to a legacy system, so there is added flexibility and improved power usage effectiveness (PUE).
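PUE is simply total facility energy divided by the energy that reaches IT equipment, so a value closer to 1.0 means less overhead spent on cooling and power delivery. A minimal sketch, with entirely hypothetical kW figures, shows how lower cooling overhead moves the number:

```python
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt goes to IT gear. Figures are hypothetical.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical air-cooled site: 1000 kW total draw, 800 kW reaching IT equipment
legacy = pue(1000, 800)     # 1.25

# If liquid cooling trims the cooling overhead so total draw falls to 900 kW
liquid = pue(900, 800)      # 1.125

print(f"PUE: {legacy:.3f} -> {liquid:.3f}")
```

The same IT load with less cooling overhead drops PUE from 1.25 to 1.125 in this made-up example; real results depend entirely on the facility.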
Direct to chip cooling (also called cold plate-based cooling) is a more involved system, moving liquid to and from the cold plate, yet it can be retrofitted into existing data centre infrastructure.
The closed loop cooling version of direct to chip is completely built into the rack and sealed. No foundational infrastructure is changed, so the footprint remains the same, and any possible leaks are self-contained in that rack.
In an open loop cooling system, a cooling distribution unit (CDU) pumps cold liquid into the server and hot liquid out, recycling the liquid between steps.
The reasons a data centre chooses liquid cooling vary depending on the user's goals.
Whatever the ultimate reasons for using liquid cooling, the results are impressive. A reduction in power use of up to 40%1 even takes into account that liquid cooling requires its own motors, pumps, electronics, etc. That is a significant move toward meeting sustainability targets and handling today's high-density demands.
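A quick back-of-envelope calculation shows what a 40% cut in cooling power means over a year. The cooling load below is a hypothetical assumption, not a figure from the article:

```python
# Back-of-envelope: annual energy saved if liquid cooling cuts cooling power by 40%.
# The 300 kW cooling load is a hypothetical assumption for illustration only.

HOURS_PER_YEAR = 8760

cooling_load_kw = 300.0   # assumed continuous cooling draw of an air-cooled facility
reduction = 0.40          # the up-to-40% figure cited in the article

saved_kw = cooling_load_kw * reduction
saved_kwh_per_year = saved_kw * HOURS_PER_YEAR

print(f"{saved_kw:.0f} kW continuous -> {saved_kwh_per_year:,.0f} kWh/year")
```

Even at this modest assumed scale, the saving runs to roughly a gigawatt-hour per year, which is why the sustainability case gets so much attention.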
Each situation presents unique cooling challenges. Take a look at this resource and discover more insights & keys to tackling the complexity of IT equipment cooling at the Edge.
SOURCE: 1Data Center Dynamics, Liquid cooling can cut power costs by 40 percent, states Iceotope CEO David Craig, May 4, 2021
Learn more about our Liquid Cooling here
You might also like: Edge computing examples - 3 efficient real world edge installations