New Data Center Guidelines Impact High Performance Computing
By Todd Boucher, LEDG Founder & Principal

Earlier this year, ASHRAE’s Technical Committee 9.9 (“TC9.9”) published the fifth edition of the Thermal Guidelines for Data Processing Environments. Widely recognized as the standard for defining the environmental conditions data centers must maintain, the TC9.9 guidelines are developed from input gathered across a diverse cross-section of contributors in both the IT equipment vendor community and the data center industry.

Historically, updates to the TC9.9 Thermal Guidelines have relaxed the operating envelope standards for data centers, enabling owners to run higher inlet temperatures and, as a result, improve energy efficiency. For example, in 2011 ASHRAE extended the recommended operating range for class A1 equipment, raising the top end of the recommended dry-bulb temperature to 80.6°F, with allowable ranges that stretched beyond that (note that the temperatures indicated are measured at the server inlet).

Image 1: ASHRAE Thermal Guidelines for Datacom Equipment (Fourth Edition)

Somewhat surprisingly, in the recent fifth edition of the guidelines, ASHRAE introduced a new equipment classification – H1 – and tightened its recommended operating envelope compared to class A1. The new H1 class covers systems that “tightly integrate a number of high-powered components” such as server processors, accelerators, memory chips, and networking controllers. Based on the recommendations in the fifth edition, ASHRAE indicates that these H1 systems need narrower temperature bands when air-cooling strategies are used: a recommended range of 64.4°F – 71.6°F, compared to 64.4°F – 80.6°F for class A1. See Image 2 below.

Image 2: Recommended and Allowable Envelopes for ASHRAE Class H1
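To make the difference between the two envelopes concrete, the short Python sketch below encodes the recommended server-inlet ranges cited above and checks a measured inlet temperature against them. The class names and temperature values come from the guideline figures quoted in this article; the function and variable names are illustrative, and only the recommended (not allowable) envelopes are modeled.

# Minimal sketch (illustrative only): recommended server-inlet dry-bulb
# envelopes for air-cooled equipment, using the ranges cited above (°F).
RECOMMENDED_ENVELOPES_F = {
    "A1": (64.4, 80.6),   # class A1 recommended range
    "H1": (64.4, 71.6),   # new H1 class: narrower recommended range
}

def f_to_c(temp_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def within_recommended(inlet_temp_f: float, equipment_class: str) -> bool:
    """True if a measured server-inlet temperature falls inside the
    recommended envelope for the given ASHRAE class."""
    low, high = RECOMMENDED_ENVELOPES_F[equipment_class]
    return low <= inlet_temp_f <= high

if __name__ == "__main__":
    inlet = 75.0  # hypothetical measured inlet temperature, °F
    for cls in ("A1", "H1"):
        low, high = RECOMMENDED_ENVELOPES_F[cls]
        status = "OK" if within_recommended(inlet, cls) else "outside envelope"
        print(f"{inlet}°F ({f_to_c(inlet):.1f}°C) vs class {cls} "
              f"recommended {low}-{high}°F: {status}")

Run as written, a 75°F inlet passes the A1 check but fails the H1 check, which is exactly the gap the new classification introduces.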

Many data center operators at research-based institutions already struggle with the heterogeneity of their data centers – the high-density nature of their research and high-performance computing (HPC) systems differs drastically from the profile of their enterprise computing systems. For any operator of HPC systems, this change in ASHRAE’s recommendations is important, and several factors – including when to transition from air to liquid cooling – will need to be considered.

Image 3: Air vs. Liquid Cooling Transitions (source: Vertiv, ASHRAE)

For owners utilizing high-performance computing today, ASHRAE’s addition of the H1 classification is significant. Failure to operate within the recommended envelope for the appropriate equipment classification in your data center can lead to warranty issues and shortened equipment lifecycles. Owners should begin by evaluating their current data center, assessing whether they have the capacity to support H1 recommended temperatures (without changing inlet temperatures for the remainder of the data center), and developing a plan for supporting this narrower operating range in the near term.
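As a starting point for that evaluation, the sketch below walks a set of measured rack inlet temperatures and flags which locations already sit inside the H1 recommended range and which would need colder supply air. The rack names and readings are hypothetical placeholders; only the temperature ranges come from the guideline values cited above.

# Minimal assessment sketch (hypothetical readings): flag server-inlet
# temperatures that sit inside the A1 recommended range (64.4-80.6°F) but
# outside the narrower H1 recommended range (64.4-71.6°F).
H1_RECOMMENDED_F = (64.4, 71.6)
A1_RECOMMENDED_F = (64.4, 80.6)

inlet_readings_f = {
    "rack-01": 68.0,   # within both A1 and H1 recommended ranges
    "rack-02": 74.5,   # within A1 only; too warm for H1
    "rack-03": 79.0,   # within A1 only; too warm for H1
}

for rack, temp_f in sorted(inlet_readings_f.items()):
    a1_ok = A1_RECOMMENDED_F[0] <= temp_f <= A1_RECOMMENDED_F[1]
    h1_ok = H1_RECOMMENDED_F[0] <= temp_f <= H1_RECOMMENDED_F[1]
    if h1_ok:
        status = "supports H1 recommended range"
    elif a1_ok:
        status = "within A1 only; would need colder supply air for H1 equipment"
    else:
        status = "outside both recommended ranges"
    print(f"{rack}: {temp_f}°F -> {status}")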

Questions on high-performance computing? Email us at info@ledesigngroup.com.
