The High-Performance Linpack (HPL) Evaluation on MIHIR High Performance Computing Facility at NCMRWF
DOI: https://doi.org/10.26438/ijcse/v13i3.18
Keywords: HPL, HPC, Aries interconnect, MIHIR, NCMRWF, PFLOPS
Abstract
The National Centre for Medium Range Weather Forecasting (NCMRWF) operates the MIHIR High Performance Computing (HPC) facility, with a total computing capacity of 2.8 PetaFLOPS, to run Numerical Weather Prediction (NWP) models and thereby enable accurate and timely weather forecasting. These models require computations at the scale of PFLOPS (Peta Floating Point Operations Per Second). The HPC nodes are interconnected by the high-speed, low-latency Cray Aries network. High Performance Linpack (HPL) version 2.3 has been compiled and installed on the system for this study. The purpose of running HPL is to demonstrate the current computing performance of the HPC system and to assess its efficiency by comparing the measured performance (Rmax) with the theoretical peak performance (Rpeak, derived from the system specifications). We present a performance evaluation of the HPL benchmark on MIHIR, conducting a detailed analysis of HPL parameters to optimize performance. The aim is to identify the best-optimized parameter values for MIHIR and to determine the maximum achievable performance of the compute nodes, utilizing up to 300 nodes available for research.
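As a rough illustration of how such an evaluation relates Rmax to Rpeak, the sketch below computes Rpeak as the product of node count, cores per node, clock frequency, and double-precision FLOPs per core per cycle, reports efficiency as Rmax/Rpeak, and applies the common rule of thumb of sizing the HPL problem N to roughly 80% of aggregate memory. The node count (300) is taken from the abstract; the per-node specifications (cores_per_node, clock_ghz, flops_per_cycle, mem_per_node_gib) and the Rmax value are assumed placeholder figures, not MIHIR's actual configuration or results.

# Hypothetical sketch: Rpeak, HPL efficiency, and a rule-of-thumb problem size N.
# The node specification below is an assumed example, not MIHIR's actual hardware.
import math

nodes = 300                # compute nodes used in the study
cores_per_node = 36        # assumed cores per node
clock_ghz = 2.1            # assumed clock frequency (GHz)
flops_per_cycle = 16       # assumed double-precision FLOPs per core per cycle

# Theoretical peak in GFLOPS, reported in TFLOPS.
rpeak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle
print(f"Rpeak ~ {rpeak_gflops / 1000:.1f} TFLOPS")

# Efficiency for a measured Rmax (placeholder value, not a MIHIR result).
rmax_gflops = 0.75 * rpeak_gflops
print(f"Efficiency ~ {100 * rmax_gflops / rpeak_gflops:.1f} %")

# Rule of thumb: size the N x N double-precision matrix (8 bytes per element)
# to use about 80% of the aggregate memory across all nodes.
mem_per_node_gib = 128     # assumed memory per node (GiB)
total_bytes = nodes * mem_per_node_gib * 2**30
n = int(math.sqrt(0.8 * total_bytes / 8))
print(f"Suggested HPL problem size N ~ {n}")

In practice, the block size NB and the process grid P x Q in HPL.dat are the other parameters typically tuned alongside N when searching for the best-performing configuration.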
