
Prestigious Mid-Atlantic University

Creates a Four-Tiered High-Performance Computing Strategy That Allows Users to Choose the Best Option for Their Needs


A well-defined research computing strategy is an essential part of university operations, and thriving in a rapidly evolving technology environment and an increasingly collaborative culture requires a significant institutional commitment to High-Performance Computing (HPC). An HPC data center on a university campus provides substantial computational, storage, and networking resources, and it requires ongoing financial and operational investment to deliver the scalability and density that HPC workloads demand.

A prestigious Mid-Atlantic university recognized that their HPC needs were expanding and becoming more valuable to the institution. They were supporting a wide range of densities in their high-performance computing environment, with rack loads varying from 5kW to more than 20kW per rack. They also recognized that HPC workloads are heterogeneous and change by application; their goal was to accommodate these workload variances and give research and HPC clients greater flexibility.

As a result, the University, with the help of Leading Edge Design Group, developed a four-tiered HPC strategy to accommodate their diverse research and HPC client base, allowing clients to choose the environment best suited to their needs. With the new four-tiered approach, new HPC deployments can be implemented efficiently and without compromising the reliability of the data center environment.

SOLUTION

Existing Data Center
Rack Density 0-10kW

The University has segregated portions of their existing data center into higher-density Point of Delivery modules, or PoDs, for HPC. A PoD is a defined set of compute, network, and storage resources that accommodates up to 10kW per rack with N+1 cooling redundancy. The PoD design provides tighter integration and better standardization across the data center infrastructure, and the containerized approach saves time, space, and cost while allowing for flexibility and growth.
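As a rough illustration of what N+1 cooling redundancy means for a PoD, the sketch below sizes cooling units for a hypothetical deployment. The rack count, per-rack load, and 30kW unit capacity are illustrative assumptions, not figures from the University's design.

```python
import math

def n_plus_1_cooling_units(racks: int, kw_per_rack: float, unit_capacity_kw: float) -> int:
    """Size cooling for a PoD: enough units (N) to carry the full IT heat
    load, plus one spare so a single unit can fail without losing cooling."""
    it_load_kw = racks * kw_per_rack               # nearly all IT power becomes heat
    n = math.ceil(it_load_kw / unit_capacity_kw)   # N units to meet the load
    return n + 1                                   # the "+1" redundant unit

# Hypothetical 8-rack PoD at the 10kW/rack tier, using 30kW in-row coolers
print(n_plus_1_cooling_units(racks=8, kw_per_rack=10, unit_capacity_kw=30))  # 4 (3 + 1 spare)
```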

Colocation Facility
Rack Density 10-20kW 

For rack loads that exceed 10kW per rack, the University has negotiated the option of leveraging a colocation facility. At these rack loads, a traditional colocation environment would not be feasible, and a dedicated suite or cage may be required. To accommodate the increased rack density, the data center requires cooling strategies such as in-row cooling with containment or rear door heat exchangers. A colocation provider may be able to deliver this capacity quickly and cost-effectively. Network connections from the University's campus already exist to many colocation providers, and additional connections could be added if required.
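To give a sense of why conventional colocation cooling struggles at these densities, the snippet below converts per-rack electrical load into the heat the cooling system must reject, using the standard conversion of 1 kW ≈ 3,412 BTU/hr. The rack loads shown are simply the tier boundaries from this case study.

```python
KW_TO_BTU_PER_HR = 3412  # 1 kW of IT load ~= 3,412 BTU/hr of heat to reject

for kw_per_rack in (5, 10, 20):
    btu = kw_per_rack * KW_TO_BTU_PER_HR
    print(f"{kw_per_rack:>2} kW/rack -> ~{btu:,} BTU/hr of heat per rack")
```

At 20kW per rack, that is roughly 68,000 BTU/hr of heat per rack, which is why close-coupled strategies like in-row cooling with containment or rear door heat exchangers are needed rather than room-level cooling alone.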

Containerized Data Center Solution
Rack Density > 20kW

Based on the current state of high-performance computing installations on college and university campuses, rack loads exceeding 20kW per rack are not yet common but are increasing. HPC workloads can easily scale beyond 20kW per rack, and the University is prepared to accommodate these workloads as required with a prefabricated data center solution.

Because prefabricated data centers take a modular approach to design and fabrication, they are inherently scalable and can rapidly add capacity as needed. As an alternative to the traditional data center, a container can be placed almost anywhere, including non-traditional data center locations. Each container design would be created to deliver the reliability and capacity that match the HPC workload, and direct fiber connections to the University's network would be available to address latency concerns. Developing a prefabricated data center design requires close collaboration between the data center team and the research organization, and the solution can be deployed quickly when necessary.
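As a simple illustration of how modular capacity planning works, the sketch below estimates how many prefabricated modules a given HPC cluster would need. The 500kW module size and the demand figures are hypothetical, not specifications from this project.

```python
import math

def modules_needed(target_it_load_kw: float, module_capacity_kw: float = 500) -> int:
    """Modular builds scale in fixed increments: deploy whole modules
    until the combined capacity covers the target IT load."""
    return math.ceil(target_it_load_kw / module_capacity_kw)

# Hypothetical growth path for a >20kW/rack HPC cluster
for demand_kw in (400, 900, 1800):
    print(f"{demand_kw:>4} kW IT load -> {modules_needed(demand_kw)} module(s)")
```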

Off-Site University-Owned Locations
Rack Density < 20kW

Like the first option, off-site locations owned by the University are a possibility for data center build-outs for high-performance computing. Given the on-campus demand for space and the rising costs of their urban location, the University encourages data center users to consider sites outside of campus for their HPC needs.

The offsite data centers will be committed to the same design principles of innovation, reliability, and flexibility to support different rack density profiles. Similar to the architecture described for the existing data center, the University can deploy high-density PoDs utilizing in-row cooling or rear door heat exchangers to deliver rack densities approaching 20kW per rack. The University's network is also a robust asset that extends to offsite owned locations, and the data center team can work with research organizations to validate the network's ability to support HPC workloads with minimal latency.
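One quick way to sanity-check an off-site location's latency is to estimate propagation delay over fiber, where light travels at roughly 200 km per millisecond (about two-thirds of the speed of light in a vacuum). The distances below are hypothetical, and real round-trip times add switching and equipment delays on top of this floor.

```python
FIBER_KM_PER_MS = 200  # light in fiber covers ~200 km per millisecond

def fiber_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time from propagation delay alone;
    real RTT adds switching and equipment delays on top of this."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical campus-to-offsite fiber distances
for km in (5, 25, 80):
    print(f"{km:>2} km -> >= {fiber_rtt_ms(km):.2f} ms RTT")
```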

The University’s combination of data center assets and the flexibility offered by the four-tiered approach accommodate dense high-performance computing workloads in many different ways. The University can handle different research and HPC client needs quickly, proficiently, and without compromising the reliability of the data center environment.
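Summarized as a decision rule, the four tiers map a workload's expected rack density to an environment. The sketch below captures that mapping using the thresholds from the tier headings above; because the colocation and off-site tiers overlap in density, a hypothetical prefer_university_owned flag stands in for the cost and space considerations that would drive that choice. The function is an illustration, not a tool from the project.

```python
def choose_tier(kw_per_rack: float, prefer_university_owned: bool = False) -> str:
    """Map an HPC workload's expected rack density to one of the four tiers."""
    if kw_per_rack <= 10:
        return "Existing data center PoD (0-10kW/rack, N+1 cooling)"
    if kw_per_rack <= 20:
        # Colocation and off-site owned sites overlap in this density range;
        # the flag stands in for cost and space considerations.
        if prefer_university_owned:
            return "Off-site University-owned location (<20kW/rack)"
        return "Colocation facility (10-20kW/rack, dedicated suite or cage)"
    return "Containerized/prefabricated data center (>20kW/rack)"

print(choose_tier(8))          # existing data center PoD
print(choose_tier(15))         # colocation facility
print(choose_tier(15, True))   # off-site University-owned
print(choose_tier(30))         # containerized solution
```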