Analysing the influence of InfiniBand choice on OpenMPI memory consumption

O. Perks, D. A. Beckingsale, A. S. Dawes, J. A. Herdman, C. Mazauric, S. A. Jarvis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The ever-increasing scale of modern high performance computing platforms poses challenges for system architects and code developers alike. The increase in core-count densities and the associated cost of components is having a dramatic effect on the viability of high memory-per-core ratios. Whilst the available memory per core is decreasing, the increased scale of parallel jobs is testing the efficiency of MPI implementations with respect to memory overhead. Scalability issues have always plagued both hardware manufacturers and software developers, and the combined effects can be disabling. In this paper we address the issue of MPI memory consumption with regard to InfiniBand network communications. We reaffirm some widely held beliefs regarding the existence of scalability problems under certain conditions. Additionally, we present results from testing memory-optimised runtime configurations and vendor-provided optimisation libraries. Using Orthrus, a linear solver benchmark developed by AWE, we demonstrate these memory-centric optimisations and their performance implications. We show the growth of OpenMPI memory consumption (demonstrating poor scalability) on both Mellanox and QLogic InfiniBand platforms. We demonstrate a 616× increase in MPI memory consumption for a 64× increase in core count, with a default OpenMPI configuration on Mellanox. Through the use of the Mellanox MXM and QLogic PSM optimisation libraries we observe 117× and 115× reductions, respectively, in MPI memory at the application's memory high-water mark. This significantly improves the potential scalability of the code.
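
To illustrate what the "memory-optimised runtime configurations" compared in the abstract look like in practice: OpenMPI selects its InfiniBand transport at launch time via MCA parameters. The command lines below are a minimal sketch, assuming an OpenMPI 1.x-era build (contemporary with the paper) in which the openib BTL and the MXM/PSM MTL components are available; the binary name ./orthrus and the process count are illustrative and not taken from the paper.

  # Default path: openib BTL over InfiniBand verbs
  mpirun -np 1024 --mca btl openib,sm,self ./orthrus

  # Mellanox MXM library, selected via the cm PML and mxm MTL
  mpirun -np 1024 --mca pml cm --mca mtl mxm ./orthrus

  # QLogic PSM library, selected via the cm PML and psm MTL
  mpirun -np 1024 --mca pml cm --mca mtl psm ./orthrus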

Original language: English
Title of host publication: Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013
Pages: 186-193
Number of pages: 8
Publication status: Published - 2013
Event: 2013 11th International Conference on High Performance Computing and Simulation, HPCS 2013 - Helsinki, Finland
Duration: 1 Jul 2013 - 5 Jul 2013

Publication series

Name: Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013

Conference

Conference: 2013 11th International Conference on High Performance Computing and Simulation, HPCS 2013
Country/Territory: Finland
City: Helsinki
Period: 1/07/13 - 5/07/13

Keywords

  • HWM
  • InfiniBand
  • Memory
  • MPI
  • Parallel
  • Tools

ASJC Scopus subject areas

  • Applied Mathematics
  • Modelling and Simulation
