HPC and AI: A Powerful Combination

Machine learning and deep learning augment HPC to achieve new insights faster.

More Data and Deeper Analysis with HPC AI

  • AI implementations have an affinity to the compute architecture of HPC, and both AI and HPC benefit from similar configurations based on high-performance Intel® hardware.

  • Researchers at CERN are using Intel-enabled convolutional neural networks that integrate the laws of physics into AI models to drive more accurate results for real-world use cases.

AI-Augmented HPC

The architecture required for HPC implementations is similar in many ways to the architecture required for AI. Both use high levels of compute and storage, large memory capacity and bandwidth, and high-bandwidth fabrics to achieve results, typically by processing massive data sets of increasing size. Deep learning is a great match for problems addressed by HPC that involve very large, multidimensional data sets. For example, Quantifi used Intel-enabled AI to accelerate derivative valuation in financial markets by 700x over conventional methods,1 providing near-real-time results for common valuation workloads.

The promise of AI in HPC is that AI models can augment expert analysis of data sets to produce results faster at the same level of accuracy. Key HPC use cases are benefiting from advanced AI capabilities, including:

  • Analytics for financial services (FSI), such as risk and fraud detection, as well as for logistics and manufacturing.
  • Industrial product design, computational fluid dynamics (CFD), computer-aided engineering (CAE), and computer-aided design (CAD).
  • Scientific visualization and simulation, especially in fields such as high-energy physics.
  • Pattern clustering, life sciences, genomic sequencing, and medical research.
  • Earth sciences and energy sector exploration.
  • Weather, meteorology, and climate science.
  • Astronomy and astrophysics.

How Workloads Have Changed

Many of the current use cases for AI are limited to edge or data center deployments, such as intelligent traffic systems that lean heavily on smart cameras for AI object recognition. The algorithms underpinning AI models have become far more complex, offering greater potential for scientific discovery, innovation, and industrial and business applications, but also demanding far more computation. The challenge is how to scale AI inference to HPC levels: how to go from recognizing traffic patterns at an intersection to sequencing a genome in hours instead of weeks.

Fortunately, the HPC community offers decades of experience on how to address the challenges of AI at scale, such as the need for more parallelism, fast I/O for massive data sets, and efficient navigation of distributed computing environments. HPC capabilities such as these can help accelerate AI to achieve useful results, such as applying expert-level heuristics via deep learning inference to thousands of transactions, workloads, or simulations per second.
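To make the parallelism point concrete, the sketch below shows one common HPC pattern applied to AI: sharding a large batch of inference inputs across MPI ranks so that each rank scores its slice independently. This is a minimal illustration using mpi4py and NumPy; `score_batch` is a hypothetical stand-in for any trained model's inference call, not part of a real library.

```python
# Minimal sketch: data-parallel AI inference across MPI ranks.
# Run with, e.g.:  mpiexec -n 4 python shard_inference.py
# score_batch() is a hypothetical placeholder for a real model's inference call.
import numpy as np
from mpi4py import MPI

def score_batch(inputs: np.ndarray) -> np.ndarray:
    # Placeholder "model": replace with a real inference call.
    return inputs.sum(axis=1)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Rank 0 loads (or generates) the full data set, then scatters equal shards.
if rank == 0:
    full_batch = np.random.rand(size * 1000, 64).astype(np.float32)
    shards = np.array_split(full_batch, size)
else:
    shards = None

local_shard = comm.scatter(shards, root=0)   # each rank receives one shard
local_scores = score_batch(local_shard)      # independent, parallel inference
all_scores = comm.gather(local_scores, root=0)

if rank == 0:
    results = np.concatenate(all_scores)
    print(f"Scored {results.shape[0]} records across {size} ranks")
```

The same scatter/compute/gather pattern extends to preprocessing and training steps, which is one reason AI workloads map so naturally onto existing HPC cluster schedulers and fabrics.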

Physics-Informed Neural Networks (PINNs)

One example of an AI-augmented HPC use case is the integration of the laws of physics into inferencing models to generate more realistic outputs. In these applications, the neural networks must obey known laws such as the conservation of mass, momentum, and energy, and are referred to as physics-informed neural networks (PINNs). PINNs can be used to augment, or replace, HPC modeling and simulation for use cases like fluid flow analysis, molecular dynamics, airfoil and jet engine design, and high-energy physics.
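As a rough illustration of the idea (a minimal sketch, not a production PINN and not tied to any particular Intel library), the example below trains a tiny network whose output u(x) is penalized both for missing a known boundary condition and for violating a simple governing equation, du/dx = -u, with the derivative obtained through automatic differentiation.

```python
# Minimal PINN sketch in PyTorch: learn u(x) satisfying du/dx = -u with u(0) = 1.
# The "physics" enters the loss as a residual term computed via autograd.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0.0, 2.0, 100).reshape(-1, 1)
x.requires_grad_(True)          # needed so autograd can give du/dx
x0 = torch.zeros(1, 1)          # boundary point x = 0

for step in range(2000):
    opt.zero_grad()
    u = net(x)
    # du/dx via automatic differentiation
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    loss_physics = ((du_dx + u) ** 2).mean()          # residual of du/dx = -u
    loss_boundary = (net(x0) - 1.0).pow(2).mean()     # enforce u(0) = 1
    loss = loss_physics + loss_boundary
    loss.backward()
    opt.step()

# The trained network should approximate the analytic solution u(x) = exp(-x).
print(net(torch.tensor([[1.0]])).item(), torch.exp(torch.tensor(-1.0)).item())
```

In real applications the residual term encodes the governing PDEs, such as the Navier-Stokes equations for fluid flow, and is typically combined with data-fit terms from simulation or experiment.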

For example, CERN researchers used Intel® Deep Learning Boost (Intel® DL Boost) on Intel® Xeon® Scalable processors to replace Monte Carlo simulations for particle collisions. Low-precision int8 quantization helped deliver up to 68,000x faster processing than software simulations,2 with a slight accuracy improvement as well.
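The CERN pipeline itself is not reproduced here, but the general low-precision approach can be illustrated with standard tooling. The sketch below uses PyTorch's built-in dynamic quantization to convert the Linear layers of a small placeholder model from fp32 to int8 weights; on supporting Intel® Xeon® Scalable processors, int8 operations like these can take advantage of instructions such as AVX-512 VNNI. The model shown is a hypothetical stand-in, not CERN's network.

```python
# Generic low-precision sketch: post-training dynamic quantization to int8.
# The fp32 model below is a hypothetical placeholder for a trained surrogate.
import torch
import torch.nn as nn

fp32_model = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Convert Linear layer weights to int8; activations are quantized on the fly.
int8_model = torch.ao.quantization.quantize_dynamic(
    fp32_model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1024, 64)
with torch.inference_mode():
    ref = fp32_model(x)
    out = int8_model(x)

# Spot-check that outputs stay close to the fp32 baseline.
print("max abs diff:", (ref - out).abs().max().item())
```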

AI in HPC Is Driven by Data Growth

The main driver for combined HPC and AI workloads is the persistent growth of data and the need for analysis to keep pace at HPC scale. AI algorithms are increasing in sophistication and can handle much larger data sets than in previous years, especially since the introduction of deep learning methodologies. Disciplines like genomic sequencing are generating a staggering amount of data; institutions like the Broad Institute of MIT and Harvard are creating about 24 terabytes of new data each day.

The San Diego Supercomputer Center (SDSC) hosts one of the largest academic data centers in the world and is recognized as an international leader in data use, management, storage, and preservation. Its AI-focused Voyager supercomputer allows scientists to develop new approaches for accelerated training and inferencing (case study: SDSC Builds AI-Focused “Voyager” Supercomputer).

Overcoming Challenges to AI in HPC Adoption

When it comes to HPC configurations for AI, there has traditionally been a trade-off between AI and HPC requirements within the CPU architecture. AI-heavy workloads typically trade core count for per-core speed, while HPC workloads often favor high core counts and greater core-to-core bandwidth. With continued generational improvements, Intel is easing that trade-off with solutions such as the AI acceleration built into Intel® Xeon® Scalable processors.

The following key innovations in both the hardware and software layer are making it easier to design and build AI solutions:

  • Intel® Xeon® Scalable processors deliver the high tiers of AI performance these workloads require, with AI acceleration built in. Intel® AVX-512 with Intel® DL Boost Vector Neural Network Instructions (VNNI), exclusive to Intel® processors, delivers optimized AI performance for fast insights in less time.
  • Low-precision optimization libraries within the Intel® oneAPI AI Analytics Toolkit are making it easier to code for HPC and AI platforms while increasing performance and maintaining accuracy thresholds (see the sketch after this list).
  • Intel® FPGAs for machine learning support high parallelization and help accelerate time to results and insights for HPC and AI workloads.
  • The Gaudi platform from Habana Labs, Intel’s data center team focused on deep learning processor technologies, enables data scientists and machine learning engineers to accelerate training and to build new models or migrate existing ones with just a few lines of code, improving productivity and lowering operational costs. Habana accelerators are purpose-built for AI model training and inference at scale.
  • AI developers are refining their techniques and code to run more effectively on HPC clusters. New optimizations are accelerating workloads end to end, from data loading to preprocessing, training, and inference.
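As referenced in the list above, the low-precision workflow can be sketched with Intel® Neural Compressor, the quantization library distributed with Intel's AI tooling, which tunes an int8 conversion against an accuracy target. This is a hedged sketch assuming the Neural Compressor 2.x interface; the tiny model, calibration data, and accuracy metric are hypothetical placeholders standing in for a real workload.

```python
# Sketch: accuracy-aware int8 post-training quantization with Intel Neural Compressor.
# Assumes the Neural Compressor 2.x API; model, data, and metric are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor.config import PostTrainingQuantConfig, AccuracyCriterion
from neural_compressor.quantization import fit

fp32_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).eval()

# Small calibration set (placeholder data).
calib_loader = DataLoader(
    TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,))), batch_size=32
)

def evaluate(model) -> float:
    # Placeholder metric; a real eval_func returns the model's task accuracy.
    correct, total = 0, 0
    for x, y in calib_loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Tune the int8 conversion, tolerating at most a 1% relative accuracy drop.
conf = PostTrainingQuantConfig(accuracy_criterion=AccuracyCriterion(tolerable_loss=0.01))
q_model = fit(model=fp32_model, conf=conf, calib_dataloader=calib_loader, eval_func=evaluate)
q_model.save("./int8_model")
```

The accuracy criterion is the point of this workflow: the tool searches quantization configurations and only keeps an int8 model that stays within the stated tolerance of the fp32 baseline.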

Complexity is also a major source of friction for HPC and AI adoption. The skill sets required are highly domain specific, and businesses will need to acquire talent trained in both HPC and AI to succeed. Intel’s industry leadership can help pave the way, as Intel collaborates closely with both the HPC and AI communities to share expertise and ideas.

Conclusion: Bringing AI Intelligence to HPC

AI is increasingly being infused into HPC applications, with new technologies and methodologies increasing the pace and scale of AI analysis for faster discovery and insights. With these innovations, data scientists and researchers can rely on AI to process more data, create more realistic simulations, and make more accurate predictions, often in less time.