Next Generation MPI Programming: Advanced MPI-2 and New Features in MPI-3 at ISC'16
Presenters
Pavan Balaji, Argonne National Laboratory
Torsten Hoefler, ETH Zürich
Abstract
The Message Passing Interface (MPI) has been the de facto standard for
parallel programming for nearly two decades. However, the vast
majority of applications rely only on basic MPI-1 features without
taking advantage of the rich functionality the rest of the
standard provides. Further, with the advent of MPI-3 (released in
September 2012), a large number of new features were introduced into
MPI, including efficient one-sided communication, support for external
tools, nonblocking collective operations, and improved support for
topology-aware data movement. The upcoming MPI-4 standard aims to
improve the standard further in a number of respects.
This is an advanced-level tutorial that will provide an overview of
various powerful features of MPI, especially in MPI-2 and MPI-3, and
will present a brief preview of what is being planned for MPI-4.
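To give a flavor of one of the MPI-3 features mentioned above, the following minimal C sketch (illustrative only, not part of the tutorial material) overlaps a nonblocking collective reduction with independent computation; the buffer contents and the work done during the overlap are placeholders.

#include <mpi.h>
#include <stdio.h>

/* Minimal sketch: overlap a global reduction with local work using
 * the MPI-3 nonblocking collective MPI_Iallreduce. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;   /* per-process contribution (placeholder) */
    double global = 0.0;
    MPI_Request req;

    /* Start the reduction; it proceeds in the background. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent computation can be performed here ... */

    /* Complete the collective before using the result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("sum of ranks = %.0f\n", global);

    MPI_Finalize();
    return 0;
}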
Detailed Description
Overview and Goals
This tutorial will introduce advanced MPI-2.2 and new MPI-3.0 functions
and concepts. We will demonstrate use cases and examples for the
concepts discussed. New concepts such as nonblocking and neighborhood
collective communication are intended to raise the level of abstraction:
the user tells the library ``what is done'' rather than ``how it is done'',
and the library can then optimize for the particular target architecture.
Existing concepts, such as MPI datatypes, can often be used to improve
both application performance and programmer productivity.
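As an illustration of the ``what, not how'' idea, the following minimal C sketch (our own example, not taken from the tutorial slides) performs a halo exchange with the MPI-3 neighborhood collective MPI_Neighbor_alltoall on a Cartesian process grid; the buffer contents are placeholders.

#include <mpi.h>

/* Minimal sketch: a halo exchange on a 2-D process grid using the
 * MPI-3 neighborhood collective MPI_Neighbor_alltoall.  The library,
 * not the application, decides how the exchange is carried out. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Let MPI pick a balanced 2-D decomposition. */
    int dims[2] = {0, 0}, periods[2] = {1, 1};
    MPI_Dims_create(size, 2, dims);

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    /* One value per Cartesian neighbor (-/+ in each of the two dimensions). */
    double sendbuf[4] = {1.0, 2.0, 3.0, 4.0};   /* placeholder halo data */
    double recvbuf[4];

    MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                          recvbuf, 1, MPI_DOUBLE, cart);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}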
Attendees will learn how the concepts are intended to function and
under which circumstances they should, and should not, be used. We
will also discuss MPI algorithms for advanced communication problems.
The MPI-3.0 standard introduces new interfaces to query and manipulate
both configuration and performance data through the newly defined MPI tool
information interface. We will give a first look at its capabilities and present
a series of use cases, both inside applications and in external tools.
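For readers unfamiliar with the tool information interface, the following minimal C sketch (illustrative only; the set of variables reported is implementation-defined) lists the control variables an MPI library exposes through MPI_T.

#include <mpi.h>
#include <stdio.h>

/* Minimal sketch: list the names of the control variables exposed by
 * the MPI implementation through the MPI-3 tool information
 * interface (MPI_T). */
int main(int argc, char **argv)
{
    int provided, ncvar;

    /* MPI_T may be initialized before (and finalized after) MPI itself. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvar);
    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("cvar %d: %s\n", i, name);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}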
Additionally, MPI-3 includes a series of small updates, changes, and additions, which we
will summarize for the attendees along with a description of their impact and
consequences for application developers. We intend this section to raise awareness of minor
features that are often overshadowed by the major updates but still provide powerful
additions for the optimization and efficient design of MPI codes. We will also briefly discuss open
proposals for future MPI versions as well as the process behind the
standardization of MPI.
Targeted audience
Industry (ISVs), academics, researchers, developers
Content level
50% Intermediate + 50% Advanced
We assume that attendees are familiar with MPI basics and have used MPI before. However, several
new features target the intermediate level while others are more advanced.
Audience prerequisites
Familiarity with C (or Fortran). Some familiarity with MPI (used it before and understand basic concepts).
Basic concepts of distributed memory parallel programming.
Slides: hoefler-balaji-isc16-advanced-mpi.pdf (5934.49 KB)