Analysing the influence of InfiniBand choice on OpenMPI memory consumption

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

Analysing the influence of InfiniBand choice on OpenMPI memory consumption. / Perks, O.; Beckingsale, D. A.; Dawes, A. S.; Herdman, J. A.; Mazauric, C.; Jarvis, S. A.

Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013. 2013. p. 186-193, 6641412 (Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013).

Harvard

Perks, O, Beckingsale, DA, Dawes, AS, Herdman, JA, Mazauric, C & Jarvis, SA 2013, Analysing the influence of InfiniBand choice on OpenMPI memory consumption. in Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013., 6641412, Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013, pp. 186-193, 2013 11th International Conference on High Performance Computing and Simulation, HPCS 2013, Helsinki, Finland, 1/07/13. https://doi.org/10.1109/HPCSim.2013.6641412

APA

Perks, O., Beckingsale, D. A., Dawes, A. S., Herdman, J. A., Mazauric, C., & Jarvis, S. A. (2013). Analysing the influence of InfiniBand choice on OpenMPI memory consumption. In Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013 (pp. 186-193). [6641412] (Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013). https://doi.org/10.1109/HPCSim.2013.6641412

Vancouver

Perks O, Beckingsale DA, Dawes AS, Herdman JA, Mazauric C, Jarvis SA. Analysing the influence of InfiniBand choice on OpenMPI memory consumption. In Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013. 2013. p. 186-193. 6641412. (Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013). https://doi.org/10.1109/HPCSim.2013.6641412

Author

Perks, O. ; Beckingsale, D. A. ; Dawes, A. S. ; Herdman, J. A. ; Mazauric, C. ; Jarvis, S. A. / Analysing the influence of InfiniBand choice on OpenMPI memory consumption. Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013. 2013. pp. 186-193 (Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013).

BibTeX

@inproceedings{447508282484415984382ca623bae8f8,
title = "Analysing the influence of InfiniBand choice on OpenMPI memory consumption",
abstract = "The ever increasing scale of modern high performance computing platforms poses challenges for system architects and code developers alike. The increase in core count densities and associated cost of components is having a dramatic effect on the viability of high memory-per-core ratios. Whilst the available memory per core is decreasing, the increased scale of parallel jobs is testing the efficiency of MPI implementations with respect to memory overhead. Scalability issues have always plagued both hardware manufacturers and software developers, and the combined effects can be disabling. In this paper we address the issue of MPI memory consumption with regard to InfiniBand network communications. We reaffirm some widely held beliefs regarding the existence of scalability problems under certain conditions. Additionally, we present results testing memory-optimised runtime configurations and vendor provided optimisation libraries. Using Orthrus, a linear solver benchmark developed by AWE, we demonstrate these memory-centric optimisations and their performance implications. We show the growth of OpenMPI memory consumption (demonstrating poor scalability) on both Mellanox and QLogic InfiniBand platforms. We demonstrate a 616× increase in MPI memory consumption for a 64× increase in core count, with a default OpenMPI configuration on Mellanox. Through the use of the Mellanox MXM and QLogic PSM optimisation libraries we are able to observe a 117× and 115× reduction in MPI memory at application memory high water mark. This significantly improves the potential scalability of the code.",
keywords = "HWM, InfiniBand, Memory, MPI, Parallel, Tools",
author = "O. Perks and Beckingsale, {D. A.} and Dawes, {A. S.} and Herdman, {J. A.} and C. Mazauric and Jarvis, {S. A.}",
year = "2013",
doi = "10.1109/HPCSim.2013.6641412",
language = "English",
isbn = "9781479908363",
series = "Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013",
pages = "186--193",
booktitle = "Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013",
note = "2013 11th International Conference on High Performance Computing and Simulation, HPCS 2013 ; Conference date: 01-07-2013 Through 05-07-2013",

}

RIS

TY - GEN

T1 - Analysing the influence of InfiniBand choice on OpenMPI memory consumption

AU - Perks, O.

AU - Beckingsale, D. A.

AU - Dawes, A. S.

AU - Herdman, J. A.

AU - Mazauric, C.

AU - Jarvis, S. A.

PY - 2013

Y1 - 2013

N2 - The ever increasing scale of modern high performance computing platforms poses challenges for system architects and code developers alike. The increase in core count densities and associated cost of components is having a dramatic effect on the viability of high memory-per-core ratios. Whilst the available memory per core is decreasing, the increased scale of parallel jobs is testing the efficiency of MPI implementations with respect to memory overhead. Scalability issues have always plagued both hardware manufacturers and software developers, and the combined effects can be disabling. In this paper we address the issue of MPI memory consumption with regard to InfiniBand network communications. We reaffirm some widely held beliefs regarding the existence of scalability problems under certain conditions. Additionally, we present results testing memory-optimised runtime configurations and vendor provided optimisation libraries. Using Orthrus, a linear solver benchmark developed by AWE, we demonstrate these memory-centric optimisations and their performance implications. We show the growth of OpenMPI memory consumption (demonstrating poor scalability) on both Mellanox and QLogic InfiniBand platforms. We demonstrate a 616× increase in MPI memory consumption for a 64× increase in core count, with a default OpenMPI configuration on Mellanox. Through the use of the Mellanox MXM and QLogic PSM optimisation libraries we are able to observe a 117× and 115× reduction in MPI memory at application memory high water mark. This significantly improves the potential scalability of the code.

AB - The ever increasing scale of modern high performance computing platforms poses challenges for system architects and code developers alike. The increase in core count densities and associated cost of components is having a dramatic effect on the viability of high memory-per-core ratios. Whilst the available memory per core is decreasing, the increased scale of parallel jobs is testing the efficiency of MPI implementations with respect to memory overhead. Scalability issues have always plagued both hardware manufacturers and software developers, and the combined effects can be disabling. In this paper we address the issue of MPI memory consumption with regard to InfiniBand network communications. We reaffirm some widely held beliefs regarding the existence of scalability problems under certain conditions. Additionally, we present results testing memory-optimised runtime configurations and vendor provided optimisation libraries. Using Orthrus, a linear solver benchmark developed by AWE, we demonstrate these memory-centric optimisations and their performance implications. We show the growth of OpenMPI memory consumption (demonstrating poor scalability) on both Mellanox and QLogic InfiniBand platforms. We demonstrate a 616× increase in MPI memory consumption for a 64× increase in core count, with a default OpenMPI configuration on Mellanox. Through the use of the Mellanox MXM and QLogic PSM optimisation libraries we are able to observe a 117× and 115× reduction in MPI memory at application memory high water mark. This significantly improves the potential scalability of the code.

KW - HWM

KW - InfiniBand

KW - Memory

KW - MPI

KW - Parallel

KW - Tools

UR - http://www.scopus.com/inward/record.url?scp=84888074458&partnerID=8YFLogxK

U2 - 10.1109/HPCSim.2013.6641412

DO - 10.1109/HPCSim.2013.6641412

M3 - Conference contribution

AN - SCOPUS:84888074458

SN - 9781479908363

T3 - Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013

SP - 186

EP - 193

BT - Proceedings of the 2013 International Conference on High Performance Computing and Simulation, HPCS 2013

T2 - 2013 11th International Conference on High Performance Computing and Simulation, HPCS 2013

Y2 - 1 July 2013 through 5 July 2013

ER -
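
Note: the abstract above quotes MPI memory consumption at the application's memory high-water mark (HWM). As an illustrative sketch only, the C code below shows one common way a per-process HWM can be sampled on Linux, by reading the VmHWM field from /proc/self/status and reducing the maximum across MPI ranks. This is an assumed, hypothetical example for context; it is not the instrumentation used in the paper, and VmHWM reflects whole-process peak resident memory rather than the MPI library's contribution in isolation.

/* Illustrative sketch only: sample the per-process memory high-water mark
 * (VmHWM, in kB) from Linux procfs and report the maximum across ranks.
 * The VmHWM field is standard Linux; the surrounding code is a hypothetical
 * example, not the measurement tooling used in the paper. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return this process's peak resident set size in kB, or -1 on failure. */
static long read_vmhwm_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return -1;
    char line[256];
    long hwm_kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmHWM:", 6) == 0) {
            hwm_kb = strtol(line + 6, NULL, 10);
            break;
        }
    }
    fclose(f);
    return hwm_kb;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* ... application or solver work would run here ... */

    long local_hwm = read_vmhwm_kb();
    long max_hwm = 0;
    MPI_Reduce(&local_hwm, &max_hwm, 1, MPI_LONG, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("max per-process memory HWM across %d ranks: %ld kB\n",
               size, max_hwm);

    MPI_Finalize();
    return 0;
}

Isolating the MPI library's share of this footprint, as the paper does, requires finer-grained heap tracing rather than this coarse procfs counter.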