As institutions in higher education, healthcare, government, and other industries experience rapid growth in demand for high-performance computing (HPC), leaders and decision-makers must consider how to plan and allocate resources to adapt. Leading Edge Design Group (LEDG) founder Todd Boucher joined me to discuss these industry trends and share his insights as a sought-after HPC advisor.
What is driving organizations’ efforts to expand their HPC capabilities?
HPC is growing rapidly across several industries, and IT leaders recognize the need to form specific strategies to accommodate and scale their HPC infrastructure, often within the constraints of an existing data center. Here are three key drivers that Todd believes are shaping the industry.
1. Computation in the Research Process
Traditionally, we have considered experiment and theory the foundational elements of research. Computation, however, has become what Boucher calls the third pillar of research science. So many research institutions now rely on HPC to process vast amounts of data at scale that the global HPC market is expected to reach $66.5B by 2028, registering a CAGR of 6.3% over the forecast period, according to Emergence Research. If research departments are to remain competitive, they must find ways to increase computing capacity.
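As a quick sanity check on what a 6.3% CAGR implies, the short Python sketch below back-solves the implied starting market size from the cited 2028 figure and growth rate. The seven-year forecast window is an assumption for illustration; neither the window nor the implied base figure comes from the article.

```python
# Illustrative compound-growth arithmetic (assumptions, not article data):
# back-solve the implied base-year market size from the cited forecast.
target_2028 = 66.5   # $B, cited 2028 forecast
cagr = 0.063         # cited 6.3% compound annual growth rate
years = 2028 - 2021  # assumed 7-year forecast window

# Compounding in reverse gives the implied starting market size.
base_2021 = target_2028 / (1 + cagr) ** years
print(f"Implied 2021 market size: ${base_2021:.1f}B")

# Growing that base forward at 6.3% per year recovers the 2028 forecast.
projected_2028 = base_2021 * (1 + cagr) ** years
print(f"Projected 2028 market size: ${projected_2028:.1f}B")
```

The point of the exercise is simply that a modest-sounding single-digit growth rate compounds into a market roughly half again as large over seven years.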
2. AI & Machine Learning
As technological capabilities rapidly evolve, industries are increasingly using artificial intelligence (AI) to improve research operations at scale and bolster efficiency. HPC lets institutions push these software tools to their maximum potential by enabling the simultaneous analysis of terabytes of information. However, the full potential of HPC-powered AI applications has yet to be realized. According to a 2021 study (Cortez et al.), there is no reliable framework for narrowing down the ideal set of hyperparameters to guarantee rapid convergence and optimal performance of AI models as the number of processor or GPU nodes used to speed up training increases. The strongly experimental nature of training deep learning models on HPC platforms has escalated competition, enticing institutions in higher education and healthcare to invest in expanding their capacity.
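To make the hyperparameter problem concrete: one widely used heuristic (the "linear scaling rule," not something the article or the cited study endorses) adjusts the learning rate in proportion to the number of GPU workers, precisely because no principled framework guarantees it will work. A minimal hypothetical sketch:

```python
# Hypothetical illustration of the hyperparameter-scaling problem.
# The "linear scaling rule" is a common heuristic, not a guarantee:
# it scales the learning rate with the number of GPU workers.
def scaled_learning_rate(base_lr: float, base_workers: int, workers: int) -> float:
    """Scale the learning rate proportionally to the GPU worker count."""
    return base_lr * workers / base_workers

# Doubling the workers doubles the effective batch size, so the heuristic
# doubles the learning rate -- but convergence is not guaranteed, which is
# exactly the open problem the study describes.
print(scaled_learning_rate(0.1, base_workers=8, workers=16))  # prints 0.2
```

The function names and values here are illustrative assumptions; the takeaway is that scaling out hardware forces retuning of training hyperparameters, and today that retuning remains largely experimental.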
Members of Emory University’s School of Medicine are using AI methods to improve outcomes for people with cancer and other fatal diseases. Other complex issues being tackled with AI include racial health disparities and global wealth disparities. It is essential to consider ethics when using AI tools, explains Paul Root Wolpe, director of Emory’s Center for Ethics: every decision an AI program learns to make is based on values that were programmed into it, directly or indirectly. To ensure that institutions use AI for “good,” teams must consider which values inform AI decisions and how those values should be programmed.
3. Funding Structure
“Funding for research computing is most often, in my experience, a totally different funding source than a typical capital project on a university campus.”
Data Lab, whose mission is to promote transparency in government finances through data-driven analysis of federal spending, reports that in 2018 a total of $28,716,550,516 in research grants was awarded to universities. These grants came from nearly every agency and sub-agency of the federal government. Universities are also winning more grants from private sources. Stanford University, for example, has more than 7,900 externally sponsored projects, with a total sponsored-projects budget of $1.69 billion for 2020-2021, including the SLAC National Accelerator Laboratory (SLAC); the federal government sponsors approximately 79 percent of these projects.
The structure of grant funding has also shifted. Grant proposals once focused heavily on having the competitive talent needed to conduct the research, with the university providing the infrastructure. Today, proposals can, and often should, include requirements and support for high-performance computing. Depending on the competition, investments in facilities and technology may still fall to the university, but the ability to use grant funds for talent rather than equipment is a significant win. Having the right high-performance computing infrastructure enables universities to compete at scale and with greater flexibility.
Is There a Wrong Way to Approach Improvement?
Boucher stated without hesitation that there is a wrong way to improve: by being reactive.
The trends within the research computing industry are clear. Leaders must plan for the long term to develop a sustainable system that can adapt incrementally to meet capacity demands as they grow. By reactively implementing short-term solutions, institutions may face complex logistical challenges down the line, including budget constraints, real estate constraints, and infrastructure upgrades.
Industry leaders should be proactive, considering how to scale for rising demand, the space and infrastructure required to expand facilities, and the equipment needed to maximize efficiency.
How Can Organizations Effectively Use the Upcoming Quarter to Jumpstart HPC Improvements?
During the summer months, research computing demand slows for many organizations. This lull gives organizations an opportunity to address issues and make plans without significantly disrupting research efforts. The focus should be on aligning physical infrastructure with the technology profile so the two work in harmony.
“The institutions that effectively improve HPC are deliberately planning for the future.”
For more information on HPC, look for the launch of our High-Performance Research Computing report in September. We are gathering insights from HPC committee members, data center strategists, research computing center leaders, facility managers, and others to share with respondents. Questions? Email Todd Boucher at email@example.com