Why Should You Care about Server Delta-T?

Welcome to Keep Your Cool - a series tackling simple cooling optimization strategies for busy data center operators, by former busy data center operator Gregg Haley.

It goes without saying that data centers are designed to house computing and storage devices in a safe, secure, and environmentally sound space, with strictly controlled temperature ranges for optimum performance. But have all data centers been built to the same model? Of course not; some have grown organically into what they are today. As a result, there is a wide range of operating conditions in these spaces, reflecting the varying ages of their designs and infrastructure equipment. Many of the newer designs promote greater efficiency than the legacy designs, and these newer designs generally result in higher Delta-T temperatures recorded across the server.

What is Server Delta-T? It is the temperature rise of the cooling air measured between the server inlet and the server exhaust. That rise in temperature is a measure of how much heat has been removed from the server. The temperature differential is influenced by the speed at which air is drawn through the server. A lower fan speed allows more heat to transfer to, or be absorbed by, the cooling air, resulting in a higher Delta-T. A higher fan speed results in a lower Delta-T, as each unit of cold air passing through the server has less opportunity to absorb heat. Think of it as running the hot water faucet and then opening the cold water faucet: the temperature drops, and the more you open the cold faucet, the lower the resulting water temperature. Similarly, pushing a higher volume of air through the server than necessary is inefficient; the fans consume more power than needed to remove the same amount of heat.
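The inverse relationship between airflow and Delta-T can be sketched with the common rule-of-thumb formula used later in this article for tile sizing, Delta-T (°F) = 3.10 x Watts / CFM. A minimal sketch (the server wattage and airflow numbers are illustrative):

```python
# Rule-of-thumb relationship between heat load, airflow, and Delta-T:
#   Delta-T (deg F) = 3.10 * Watts / CFM
# (Illustrative sketch; 3.10 is the common constant for air near sea level.)

def delta_t_f(watts: float, cfm: float) -> float:
    """Temperature rise of cooling air across a server, in degrees F."""
    return 3.10 * watts / cfm

# A hypothetical 500 W server: doubling the fan airflow halves the Delta-T.
print(delta_t_f(500, 78))   # ~19.9 deg F at modest airflow
print(delta_t_f(500, 156))  # ~9.9 deg F at double the airflow
```

The same heat leaves the server either way; the faster airflow simply spends more fan energy to carry it at a smaller temperature rise.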

Server Delta-T is an important metric to monitor and maintain because it can impact the reliability and performance of the servers and the overall energy efficiency of the data center. It is generally accepted that Server Delta-T should be in the 10 to 20 degree Fahrenheit range. The higher the Delta-T, the more efficient the cooling.

So what should one look for in a Delta-T reading?

  • First, the server inlet temperature should be within the ASHRAE guidelines of 64.4 to 80.6 degrees Fahrenheit.

  • The internal logic of the server should control the fan speed, which in turn controls the airflow. Today’s servers have the intelligence to vary the fan speed depending upon the internal temperatures they monitor.

  • The exhaust port should not have any wiring blocking the exhaust airflow. It always amazes me to see poor cable management creating air dams at the rear of server cabinets. Such air dams can severely restrict the airflow through the server, raising the internal server temperature and forcing the fans to speed up and consume more energy. That increased internal temperature can also shorten the life of the components within the server chassis.

  • Blanking plates should be installed above and below the server so that hot air is not drawn back into the inlet.
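The checks above can be pulled together in a small sketch, assuming the 64.4-80.6 °F ASHRAE inlet range and the 10-20 °F Delta-T target cited earlier (function and message wording are illustrative, not a real monitoring API):

```python
# Illustrative sanity checks for a single server inlet/exhaust reading.
ASHRAE_INLET_F = (64.4, 80.6)     # recommended inlet range, deg F
TARGET_DELTA_T_F = (10.0, 20.0)   # generally accepted Delta-T range, deg F

def check_reading(inlet_f: float, exhaust_f: float) -> list[str]:
    """Return a list of warnings for one inlet/exhaust temperature pair."""
    warnings = []
    lo, hi = ASHRAE_INLET_F
    if not lo <= inlet_f <= hi:
        warnings.append(f"inlet {inlet_f} F outside ASHRAE range {lo}-{hi} F")
    delta_t = exhaust_f - inlet_f
    if delta_t < TARGET_DELTA_T_F[0]:
        warnings.append(f"Delta-T {delta_t:.1f} F is low: likely excess airflow or bypass air")
    elif delta_t > TARGET_DELTA_T_F[1]:
        warnings.append(f"Delta-T {delta_t:.1f} F is high: check for blocked exhaust or recirculation")
    return warnings

print(check_reading(75.0, 90.0))  # healthy: inlet in range, 15 F Delta-T
print(check_reading(72.0, 74.0))  # flags the 2 F Delta-T as too low
```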

The highest Delta-T readings I have experienced were in a colocation facility where the hot and cold aisles were 100% isolated: floor-to-ceiling barriers between the aisles, 100% blanking-plate deployment, and, as a slab design, cold air dropped from ceiling-mounted CRAHs while the hot air was vented from the hot aisle. The hot and cold aisles each had a secure, private door access point. The room, at 100 kW of IT load, operated at a 1.34-1.38 PUE year round. This was a very effective design that was well executed. Server Delta-T readings were in the 24.0 degree Fahrenheit range.

The lowest Delta-T readings I have experienced in data centers were extremely low, averaging between 2.0 degrees and as low as 0.5 degrees Fahrenheit. Containment had not been deployed, nor had blanking plates been deployed effectively. There was more mixing of the hot and cold air, which influences the Delta-T readings. This in turn results in more cooling inefficiency and higher PUEs, and thus greater expense.

We at Purkay Labs have been in multiple data centers where the Delta-T has been 10 degrees or below, sometimes even 1 or 2 degrees. We do not need to take other measurements to know that the PUE will be over 1.5, or maybe even 2.0. Why? A low Delta-T indicates that far more cooling airflow is being supplied than the servers need. That excess cooling drives a high PUE because of the extra energy spent moving unnecessary air. There could also be significant bypass air escaping into the hot aisle.
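Because airflow scales inversely with Delta-T for a fixed IT load (CFM = 3.10 x Watts / Delta-T), a measured Delta-T far below a design target directly reveals the oversupply factor. A back-of-the-envelope sketch, assuming a 20 °F design point:

```python
# For a fixed IT load, CFM = 3.10 * Watts / Delta-T, so the ratio of a
# target Delta-T to the measured Delta-T is the airflow oversupply factor.
def oversupply_factor(measured_delta_t_f: float, target_delta_t_f: float = 20.0) -> float:
    """How many times more airflow is being moved than the design point needs."""
    return target_delta_t_f / measured_delta_t_f

# A data hall measuring only 2 F across its servers is moving roughly
# 10x the airflow a 20 F design point would require.
print(oversupply_factor(2.0))   # 10.0
print(oversupply_factor(0.5))   # 40.0
```

All of that extra air movement is fan energy spent on cooling the servers never absorb, which is why a low Delta-T alone is a strong hint of a high PUE.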

Delta-T is one of the most valuable metrics in the quest for lower PUE, lower energy consumption, and a lower carbon footprint. Sadly, not all data center managers are aware of this relationship. It is the next best parameter to manage after ensuring the cold air inlet is within ASHRAE standards.

How do I measure Delta-T?

The best method, of course, is having sensors at the inlet and outlet of every cabinet at three different heights. This is the most thorough approach, but also the most expensive, given the back-office task of monitoring the parameter. A feasible alternative is the Purkay Labs AUDIT-BUDDY tool and the Purkay Labs Assessment service, which provide a simple, inexpensive way of measuring Delta-T without affecting operations.

Once the benchmark of the Delta-T across the data hall is known, one can implement measures that increase efficiency as well as the Delta-T across the cabinets.

Why should you care about the server Delta-T?  Here are a few reasons:

  • It is an indicator of the cooling efficiency taking place in the data center.

  • A good or great Delta-T is generally more economical, as the correct level of cooling is supplied to the server - not an excess of cooling.

  • It is an indication of less bypass or recirculated air.


What steps can you take to improve Delta-T?

  • Make sure the perforated tile CFM aligns with the CFM requirements of the servers for a target Delta-T. Earlier I cited an experience with a high Delta-T in a contained design with a 100 kW IT load. The Delta-T was about 24 degrees. Using the formula CFM = (3.10 x Wattage) / Delta-T, we can calculate the aisle’s CFM requirement:

    • CFM = (3.10 x 100,000) / 24

    • CFM = 310,000 / 24

    • CFM ≈ 12,917

    • With a raised floor, the perforated tiles must be capable of passing 12,917 CFM of cooling air to the aisle. A higher volume would be wasteful. A lower volume could be detrimental to server performance (overheating).

  • Make sure best practices for managing airflow are deployed: blanking plates installed, air-blocking floor grommets in place, and rack side rails sealed.

  • Containment, if an option, is deployed.

  • Doors on the ends of cold aisles are deployed.
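The aisle airflow calculation above can be sketched directly with the article's formula, using the same numbers (100 kW IT load and a 24 °F Delta-T):

```python
# Aisle cooling-airflow requirement from the formula cited above:
#   CFM = (3.10 * Wattage) / Delta-T
def required_cfm(watts: float, delta_t_f: float) -> float:
    """Cooling airflow (CFM) needed for a given heat load and target Delta-T."""
    return 3.10 * watts / delta_t_f

# The contained 100 kW aisle with a 24 F Delta-T cited earlier:
cfm = required_cfm(100_000, 24.0)
print(f"{cfm:,.0f} CFM")  # 12,917 CFM
```

Running the same function with a lower target Delta-T shows why poor containment is expensive: at a 12 °F Delta-T the same load would demand roughly twice the airflow.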


In conclusion, a baseline assessment of the entire data hall should be performed to determine what areas require more immediate attention. Using the data collected, an improvement program should be developed to implement corrective actions. Once the items requiring attention are addressed, a follow-up baseline assessment should be performed to document the improvements, and the collected data studied to determine if additional actions are necessary. You can learn more here: https://www.purkaylabs.com/assessment-service

Purkay Labs offers Assessment Services to perform the baseline assessment for you. Our service includes an aisle-by-aisle or cage-by-cage summary report and a viewer program where one can click on a specific aisle and view the data collected, or you may study the static heat maps that depict the temperature stratification across the face of the aisle. Another option is to rent or purchase the AUDIT-BUDDY System and perform the work yourself.

About the Author

Gregg Haley is a data center and telecommunications executive with more than 30 years of leadership experience. He most recently served as the Senior Director of Data Center Operations - Global for Limelight Networks. Gregg provides data center assessment and optimization reviews, showing businesses how to reduce operating expenses by identifying energy conservation opportunities. Through infrastructure optimization, energy expenses can be reduced by 10% to 30%.

In addition to Gregg's data center efforts, he holds a certification from the Disaster Recovery Institute International (DRII) as a Business Continuity Planner. In November 2005, Gregg was a founding member and Treasurer of the Association of Contingency Planners - Greater Boston Chapter, a non-profit industry association dedicated to the promotion and education of business continuity planning. Gregg served on the chapter's Board of Directors for its first four years. He is also a past member of the American Society for Industrial Security (ASIS).

Gregg currently serves as the Principal Consultant for Purkay Labs.


