MPI performance guidelines for scalability
DOI: https://doi.org/10.26438/ijcse/v6si1.6065

Keywords: Performance guidelines for MPI functions, Scalability of MPI functions, High-performance computing

Abstract
MPI (Message Passing Interface) is the most widely used parallel programming paradigm. It is used for application development on small as well as large high-performance computing systems. The MPI standard specifies the semantics of its functions but provides no performance guarantees for implementations. Many implementations, from both vendors and research groups, are now available, and users expect consistent performance from all of them on all platforms. In the literature, performance guidelines have been defined for MPI communication functions, I/O functions, and derived datatypes. Using these guidelines as a base, we define guidelines for the scalability of MPI communication functions and verify them with a benchmark application on different MPI implementations, such as MPICH and Open MPI. The experimental results show that point-to-point communication functions are scalable; this is expected, since only a pair of processes is involved in point-to-point communication, so these guidelines are defined as performance requirements derived from the semantics of the functions. All processes participate in collective communication functions, which makes defining performance guidelines for collectives more difficult. In this paper, we define such guidelines in terms of the amount of data transferred by each function, verify the defined guidelines experimentally, and elaborate on the reasons for the violations we observe.
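The verification step described above can be sketched independently of any actual MPI run. A self-consistent guideline has the form comm(f_A)(n) ≤ comm(f_B)(n) whenever f_A can be emulated by f_B on the same amount of data (for example, MPI_Scatter should not be slower than MPI_Bcast, since a scatter can be emulated by a broadcast in which each process keeps only its own block). The snippet below is a minimal illustration of that comparison logic; the function name, the tolerance, and all timing numbers are hypothetical, not measurements from the paper.

```python
# Sketch of a self-consistent guideline check: for each message size n,
# the "specialized" function (e.g. MPI_Scatter) should not take longer
# than a more general one that can emulate it (e.g. MPI_Bcast).
# All timings below are hypothetical illustrative data, not measurements.

def guideline_violations(times_a, times_b, tolerance=0.10):
    """Return the message sizes at which times_a[n] exceeds times_b[n]
    by more than `tolerance` (relative), i.e. where the guideline
    comm(A)(n) <= comm(B)(n) is violated."""
    return [n for n in sorted(times_a)
            if n in times_b and times_a[n] > (1.0 + tolerance) * times_b[n]]

# Hypothetical median run times in microseconds, keyed by message size (bytes).
scatter_us = {1024: 12.0, 4096: 31.0, 16384: 130.0}
bcast_us   = {1024: 11.5, 4096: 33.0, 16384: 110.0}

print(guideline_violations(scatter_us, bcast_us))  # -> [16384]
```

A tolerance is used because, as the benchmarking literature cited below notes, raw MPI timings are noisy; flagging only violations beyond a relative threshold avoids reporting measurement jitter as a guideline violation.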
References
A. Mallón, Guillermo L. Taboada, Carlos Teijeiro, Juan Touriño, Basilio B. Fraguela, Andrés Gómez, Ramón Doallo, J. Carlos Mouriño, “Performance Evaluation of MPI, UPC and OpenMP on Multicore Architectures”, Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2009. Lecture Notes in Computer Science, pp. 174–184, 2009.
William D. Gropp, Rajeev Thakur, “Self-consistent MPI performance guidelines”, IEEE Transactions on Parallel and Distributed Systems, 2005.
William D. Gropp, Dries Kimpe, Robert Ross, Rajeev Thakur and Jesper Larsson Traff, “Self-consistent MPI-IO performance requirements and expectations”, Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2008. Lecture Notes in Computer Science, 2008.
William D. Gropp, Dries Kimpe, Robert Ross, Rajeev Thakur and Jesper Larsson Traff, “Performance Expectations and Guidelines for MPI Derived Datatypes”, Recent Advances in the Message Passing Interface. EuroMPI 2011. Lecture Notes in Computer Science, 2011.
Sascha Hunold, Alexandra Carpen-Amarie, Felix Donatus Lübbe, and Jesper Larsson Träff, “Automatic verification of self-consistent MPI performance guidelines”, Parallel Processing, Euro-Par 2016. Lecture Notes in Computer Science, 2016.
Ralf Reussner, Peter Sanders, and Jesper Larsson Träff, “SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI,” Journal of Scientific Programming, vol. 10, issue 1, pp. 55-65, 2002.
WCE Rock Cluster, High performance computing cluster, URL: http://wce.ac.in/it/landing-page.php?id=9.
J. Liu, B. Chandrasekaran, W. Yu, J. Wu, D. Buntinas, S. Kini, P. Wyckoff, and D. K. Panda, “Micro-Benchmark Performance Comparison of High-Speed Cluster Interconnects”, Proceedings of the 11th Symposium on High Performance Interconnects, 2003.
Sascha Hunold, Alexandra Carpen-Amarie, “Reproducible MPI benchmarking is still not as easy as you think”, IEEE Transactions on Parallel and Distributed Systems, vol. 27, issue 12, 2016.
Subhash Saini, Robert Ciotti, Brian T. N. Gunney, Thomas E. Spelce, Alice Koniges, Don Dossa, Panagiotis Adamidis, Rolf Rabenseifner, Sunil R. Tiyyagura, Matthias Mueller, “Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks”, Journal of Computer and System Sciences, vol. 74, issue 6, 2008.
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors contributing to this journal agree to publish their articles under the Creative Commons Attribution 4.0 International License, allowing third parties to share their work (copy, distribute, transmit) and to adapt it, under the condition that the authors are given credit and that in the event of reuse or distribution, the terms of this license are made clear.
