HLRN Brings Advanced Performance to HPC

HLRN chose Intel® Xeon® Platinum 9200 processors to meet their increasingly diverse needs for HPC workloads.

Executive Summary
HLRN supercomputers are used by over 100 universities and over 120 research institutions, enabling exploration of the many frontiers of scientific research to help unlock a better future. The selection of Intel’s latest processor technology to power the newest HLRN supercomputer came after detailed testing to find the best solution. Prof. Dr. Ramin Yahyapour of Göttingen University explains, “The expectation for HLRN’s supercomputer acquisition was to have a significant step up in compute power for new experiments.”

Challenge
“Science in general is getting more compute and data intensive. This means that having larger systems available translates into an ability for the scientists to do better work. That’s why HLRN is crucial for scientific research,” says Prof. Dr. Ramin Yahyapour.

HLRN is, by its own account, a very demanding client, with substantial expertise from its prior deployments of three supercomputer systems. Prof. Alexander Reinefeld from Zuse Institute Berlin emphasizes: “We are expecting the highest performance for all benchmark applications. Our benchmark suite was carefully chosen so that each code challenges specific parts of the system: CPU, communication network, and parallel I/O. We are not looking for peak theoretical performance—we demand real system performance, which makes it more complicated for vendors to optimize their infrastructure for our applications. That meant that our selection of the right processor and the right interconnect was crucial for the overall performance.”

As with most research today, the need for more real-world compute capacity stems from the reality that simulations of many kinds are critical to researchers. Faster computers are primarily used to increase simulations in size and resolution, with the expectation of enabling new discoveries.

“We demand real system performance… that meant that our selection of the right processor and the right interconnect are all crucial for the overall performance.” — Prof. Reinefeld

Solution
HLRN procured a new supercomputer with just under a quarter of a million cores. The Intel® Xeon® Platinum 9200 processors (from the 2nd Generation Intel® Xeon® Scalable processor family) will serve as the “right processors” for HLRN. For the “right interconnect,” HLRN chose Intel® Omni-Path Architecture (Intel® OPA). The system is produced by Atos (formerly Bull Computing) and will be physically split between the Zuse-Institute Berlin (ZIB) and the Georg-August-Universität Göttingen (University of Göttingen). These sites have previously used this split system model, and already have in place a dedicated, redundant, 10 gigabit, fiber optic cable spanning the more than 170 miles between Berlin and Göttingen.

Researchers at ZIB will use HLRN-IV for fluid dynamics, including developing turbulence models for aircraft wings.

Result
HLRN has announced that the new system, HLRN-IV, will be approximately six times as fast as the prior systems—offering 16 PetaFLOP/s performance.1 The excitement among researchers is palpable, and the list of research being done is mind-boggling. Prof. Reinefeld summed up his excitement saying, “It’s a great system. Our users will benefit right away from the more powerful system without needing to change their code. The homogeneous architecture of the 2nd Gen Intel® Xeon® Scalable processors will provide true performance portability, which is a crucial aspect for our researchers in order to quickly benefit from the new, more powerful system”.

Key research areas within HLRN include:

  • Earth System Sciences - Includes work on climate change. Subjects include the dynamics of oceans, rain forests, glaciers, Antarctic phytoplankton (microalgae), mineral dust cycles, and the stratosphere.
  • Fluid Dynamics - Includes turbulence models for ship turbines, wind turbines, and aircraft wings. These models are notorious for needing enormous compute power; the acquisition of HLRN-IV will enable finer-grained turbulence simulations of large systems, such as wind flow through a city or across a turbine blade. Modeling complete cities will allow studies of how new buildings would change wind flow and other factors that shape the microclimates within a city, which may lead to new design approaches that enhance city life. Some researchers hope to gain understanding that will pave the way for future high-lift commercial aircraft; others hope to save lives and ships by studying liquefaction of solid bulk cargo such as iron ore or nickel ore, a hazard that has led to the complete loss of at least seven vessels around the world in the past decade.
  • Healthcare - A broad area of research in which HLRN researchers hope to help in many ways, including improving medical care at home. A better understanding of illness and the treatment of diseases stands to benefit us all. Research includes simulations of drug efficacy, interactions, and side effects. Enormous compute power allows leading researchers in these fields to start exploring the “personalized medicine” aspects of these simulations, not just the average effects on a general population.

At the University of Göttingen, research areas include collaborative projects on cellular and molecular machines.

High Performance Across Diverse Research
In terms of science communities, HLRN has to support all types of workloads for its many researchers. HLRN systems therefore need the characteristics of a general-purpose system while still delivering the highest performance. The final configuration includes no accelerators.

“Although we looked at accelerators, including GPUs, as part of the procurement process, there was no advantage with regards to obtaining the highest performance in using GPUs or other accelerators in the system.”— Dr. Thomas Steinke, Head of ZIB Supercomputing

HLRN’s benchmarks are open and include codes that can take advantage of GPUs. HLRN found that any performance advantage on some workloads was insufficient once the reduction in general-purpose compute capacity and the additional costs involved were considered. A homogeneous system based on the 2nd Gen Intel® Xeon® Scalable processors proved to be the best choice for the diverse needs of HLRN’s scientists and researchers.

Beating Back Amdahl’s Law
Ever mindful of Amdahl’s Law, Dr. Thomas Steinke is fond of emphasizing the use of fast algorithms for fast computers. He shared that “The pressure of optimizing code for scaling on a node is less because of the high real-world performance of the 2nd Gen Intel® Xeon® Scalable processors compared to previous many-core architectures”.

The 2nd Gen Intel® Xeon® Scalable processor family offers an outstanding choice for high performance computing (HPC) and helps programmers cope with Amdahl’s Law.
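Amdahl’s Law makes the point behind Dr. Steinke’s remark concrete: the fraction of a program that stays serial caps the speedup no matter how many cores are added, so strong per-core performance matters even at scale. A minimal sketch (the function name is our own, for illustration):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Overall speedup predicted by Amdahl's Law.

    speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
    runtime that parallelizes and n is the number of workers.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even a 95%-parallel code tops out near 20x regardless of core count,
# which is why fast serial (per-core) performance remains essential.
print(amdahl_speedup(0.95, 1_000))   # roughly 19.6x on 1,000 workers
```

Raising the parallel fraction (better algorithms) or the per-core speed of the serial part shifts this ceiling, which is the sense in which a high-clock, homogeneous CPU system “beats back” the law.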

“Our users will benefit right away from the more powerful system without needing to change their code.”— Prof. Reinefeld

Future of AI in HPC
AI and Machine Learning stand to impact all areas of HLRN research. A hot area of interest is the blending of machine learning and AI techniques with traditional simulation capabilities. While promising results have been reported, there is much work to be done. The exploration of algorithms is likely to take researchers in many directions, and this need for flexibility is one reason HLRN chose 2nd Gen Intel® Xeon® Scalable processors to support their next generation of research.

Avoid Data Movement
Prof. Yahyapour emphasized that “the CPU is quite good for artificial intelligence and machine learning. That’s an area where we see more need from our researchers. Traditionally they were not so much into data intensive work but that’s something we see as a new trend for the new system that will also be of particular interest”.

Intel® Advanced Vector Extensions 512 (Intel® AVX-512) proved to be the logical choice to increase HLRN’s compute power, and the addition of Intel® Deep Learning Boost (Intel® DL Boost), which augments AVX-512, offers outstanding performance for the new frontier of HPC applications.

The ability to compute data where it is, for all types of algorithms, saves data movement. That represents a boost for compute capacity, and less wasted energy. A double win!

When exploring new algorithms, and new application techniques, nothing is more important than the flexibility of a system. The 2nd Gen Intel® Xeon® Scalable processor delivers high performance coupled with the flexibility needed to meet future challenges.

Explore Related Intel® Products

Intel® Xeon® Scalable Processors

Drive actionable insight, count on hardware-based security, and deploy dynamic service delivery with Intel® Xeon® Scalable processors.

Learn more

Intel® Deep Learning Boost (Intel® DL Boost)

Intel® Xeon® Scalable processors take embedded AI performance to the next level with Intel® Deep Learning Boost (Intel® DL Boost).

Learn more

Intel® Omni-Path Architecture (Intel® OPA)

Intel® Omni-Path Architecture (Intel® OPA) lowers system TCO while providing reliability, high performance, and extreme scalability.

Learn more

Legal Notices and Disclaimers

Intel® technologies’ features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer, or visit https://www.intel.es for more information. // Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, visit https://www.intel.es/benchmarks. // Performance results are based on testing as of the dates shown in the configurations and may not reflect all publicly available security updates. See the configuration disclosure for details. No product or component can be absolutely secure. // Cost-reduction scenarios described are intended as examples of how a given Intel®-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. // Intel does not control or audit third-party benchmark data or the websites referenced in this document.
You should visit the referenced websites and confirm whether the referenced data is accurate. // In some test cases, results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and are provided for informational purposes. Any differences in hardware, software, or system configuration may affect actual performance.

Product and Performance Information

1

The previous system, HLRN-III, consists of two complexes located at ZIB in Berlin and at Leibniz University IT Services (LUIS) in Hannover, connected by a dedicated 10 GigE fiber-optic link so that HLRN (the German alliance for the promotion of high-performance computing) can achieve a so-called single-system view. Delivered in two phases, the compute-node details are as follows: the first phase comprised two Cray XC30 computers of 744 compute nodes each, with a total of 1,488 dual-socket Intel® Xeon® E5-2695 v2 processors and 93 TB of main memory in total, connected by a fast Cray Aries network with Dragonfly topology. The second phase added 2,064 compute nodes with Intel® Xeon® E5-2680 v3 processors, for a total of 85,248 compute cores, with 1,872 compute nodes in Berlin and 1,680 compute nodes in Hannover, totaling 2.7 PetaFLOP/s of peak performance and 222 TB of added main memory.