Don’t Make a Data Center Cooling Decision Until You Read This
What causes data center professionals to lose more sleep – downtime or energy costs? It’s both, really: they’re kept awake by the fear of heat-related equipment failure and the money they’re spending to avoid it.
Extending the heat-removal capacity available with liquid-based, high-density cooling systems gives end users the opportunity not only to increase installation densities but to ensure peak performance while realizing substantial cost savings, solving both problems at once. These benefits apply at every level of the IT space: the individual footprint, the whole row, and the whole room. High-density cooling can normalize the load across the entire space, eliminating hotspots and thermal imbalances.
When evaluating a site for potential deployment of a liquid cooling system, four issues related to the state of the data center need to be addressed:
- Installation densities (current and projected)
- Accurate information on IT component utilization
- Climate control systems and their capacities
- Installation flexibility
Let’s take a look at each of these in a bit more detail.
Data centers, data rooms, and Edge deployments are all designed and specified to support a projected load rating, typically measured in watts per square foot (W/SqFt) for the entire space. However, this whole-room rating can be misleading, and relying on it can lead to improperly sized power and climate control systems.
For example, a specification may call for a 150 W/SqFt facility rating for the entire IT space, but the load is concentrated in a significantly smaller area – the footprints occupied by open racks or enclosures. A typical deployment of IT equipment with a heat load of 5 kW in a standard rack-mount server cabinet (24" wide x 48" deep, an 8 sq. ft. footprint) yields a load rating of 625 W/SqFt, significantly higher than the room rating. These numbers only increase as deployment densities rise: 8 kW per footprint yields 1,000 W/SqFt, and 20 kW yields 2,500 W/SqFt. As you can see, it is critical that the correct footprint rating is defined, and that the climate control solution is planned accordingly.
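The arithmetic above can be sketched as a quick calculation. This is a minimal illustration (the function name and defaults are ours, not a Rittal tool); it assumes the standard cabinet footprint cited in the text, 24 inches wide by 48 inches deep:

```python
def footprint_density_w_per_sqft(heat_load_kw, width_in=24.0, depth_in=48.0):
    """Convert a per-cabinet heat load to a watts-per-square-foot rating.

    Defaults to the standard rack footprint cited above:
    24 in x 48 in = 2 ft x 4 ft = 8 sq ft.
    """
    area_sqft = (width_in / 12.0) * (depth_in / 12.0)
    return heat_load_kw * 1000.0 / area_sqft

# The examples from the text:
print(footprint_density_w_per_sqft(5))   # 625.0 W/SqFt
print(footprint_density_w_per_sqft(8))   # 1000.0 W/SqFt
print(footprint_density_w_per_sqft(20))  # 2500.0 W/SqFt
```

Note how far even the modest 5 kW case lands above a 150 W/SqFt room rating: the footprint, not the room, is what must be sized for.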
Another issue to be aware of is the actual server/process utilization of IT components and its effect on installation densities. Data center operators know that relying on equipment nameplate data is a mistake; instead, they rely on actual operational and performance measurements. With this information available, sites can fine-tune the climate control systems to meet actual demands, rather than theoretical maximum loads that are never reached. However, these loads are variable and must be closely monitored to optimize overall system performance, and they have a direct impact on heat generation and the climate control capacities required.
Climate Control Systems
To date, the primary method for removing heat in the IT space remains air-based – the traditional Cold Aisle/Hot Aisle orientation. However, even with expanded thermal ranges, ambient air-based systems have limited heat-removal capacities. These installations are best deployed for densities below 8 kW per footprint; containment solutions (enclosure or aisle) can increase this to approximately 15-18 kW per footprint. Slight improvements may be possible with very tight installation and operational standards and procedures, but air-cooled systems will always hit an upper limit.
High-density liquid cooled solutions significantly increase these capacities. Heat loads in excess of 50kW per footprint can be supported with the deployment of closed loop, close-coupled liquid cooling solutions. Even with air being used as the primary coolant at the component level, incorporating a liquid cooled heat exchanger will provide much greater overall capacities.
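The per-footprint thresholds above suggest a rough first-pass selection rule. The sketch below is a hypothetical helper based only on the figures cited in this article, not a Rittal sizing tool; an actual selection depends on site specifics:

```python
def suggest_cooling(load_kw_per_footprint):
    """Rough first-pass cooling suggestion from per-footprint heat load,
    using the thresholds cited in the article (8 kW, ~15-18 kW, 50+ kW)."""
    if load_kw_per_footprint < 8:
        return "ambient air (Cold Aisle/Hot Aisle)"
    if load_kw_per_footprint <= 18:
        return "air with containment (enclosure or aisle)"
    # Above containment's range, up to 50+ kW per footprint:
    return "closed-loop, close-coupled liquid cooling"

print(suggest_cooling(5))   # ambient air (Cold Aisle/Hot Aisle)
print(suggest_cooling(12))  # air with containment (enclosure or aisle)
print(suggest_cooling(40))  # closed-loop, close-coupled liquid cooling
```

In practice the bands overlap, and as the next section argues, loads rarely stay static, so the infrastructure should be planned for the densities you expect later, not just the ones you install first.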
Perhaps the most critical factor when it comes to data center deployment is installation flexibility. The facility and all its supporting infrastructure must support maximum IT component installation flexibility. At the row level, open equipment racks or enclosures must be capable of supporting the installation and operation of these products, loosely divided into three categories:
- Termination – Cables, connectivity and cable management
- Network – Switching and routing, with associated cable and power distribution
- Server – Data processing, data storage, ancillary hardware, cable and power distribution
Termination and network components can be deployed in open racks or frames, or in enclosed cabinets. Termination components typically have negligible to very low (<1 kW) installation heat densities. Network equipment, being powered, does have higher loads, but still on the order of 3-5 kW per footprint. That leaves the major contributor to data center heat loads: server and storage products, ranging from low-density installations of 5-8 kW per footprint to heat-load densities in excess of 25-30 kW per footprint. The majority of footprints will support server and storage products, but these deployments will not remain static. An initial installation may consist of 1RU or 2RU rack-mount servers to support volume processes, but future deployments may utilize blade servers and similar enterprise-class products, with proportionate increases in heat loads. The installed infrastructure needs to be flexible enough to support those requirements.
A further flexibility advantage: the whole space does not have to be configured for high-density deployments or liquid cooling solutions. You can partition the space, keeping ambient air solutions in low-density zones and reserving liquid cooling for higher densities – perhaps one or two rows, or a separate area within a larger white space.
Another option is to use liquid cooling solutions for ALL the climate control: open-loop systems providing heat removal for lower-density footprints (and even comfort cooling), and closed-loop, immersion, or direct-to-chip solutions for the higher-density systems. This flexibility and adaptability lets you determine the best climate control deployment for each part of the space.
In today’s data center, air-based cooling doesn’t have the capacity to protect equipment from the effects of overheating and moisture, and the cost of moving massive amounts of air through the facility – an inefficient method, to be sure – is significant. If you’re considering liquid cooling to control the climate in your data center, talk with a Rittal expert. Our range of LCP (liquid cooling package) systems provides the flexible capacity needed to support higher heat loads over the life of the data center.
To learn more about the advantages of liquid cooling and its potential application in your data center, download our Data Center Cooling: 4 Effective Types of Liquid Cooling.
-Herb Villa, Rittal Sr. Applications Engineer