MPI

Description

The Message Passing Interface (MPI) was conceived for programming on distributed memory architectures. MPI is a standard that defines a set of functions for communication between processes executed on one or more processors.

MPI generally uses the SPMD (Single Program, Multiple Data) technique to achieve parallel execution. With this technique, it is not necessary to write a different program for each processor; a single program suffices. During the execution of an MPI program, multiple instances of the same program run simultaneously. Each instance is called an MPI process. Every MPI process therefore runs the same program, but each copy performs its calculations on different data, depending on its rank among all the processes. Various communication functions are used to exchange data between MPI processes.
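
As a minimal sketch of the SPMD model (the complete Hello World is among the detailed examples below), every copy of the following C program executes the same code, but prints a different message depending on its rank:

#include <stdio.h>
#include <mpi.h>

int main( int argc, char **argv )
{
    int rank, size;

    MPI_Init( &argc, &argv );                 /* start the MPI environment     */
    MPI_Comm_size( MPI_COMM_WORLD, &size );   /* total number of MPI processes */
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );   /* rank of this particular copy  */

    /* Same program everywhere; the behaviour depends only on the rank. */
    printf( "Process %d of %d\n", rank, size );

    MPI_Finalize();                           /* shut down the MPI environment */
    return 0;
}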

Basic commands

Below is a list of basic calls for MPI programming. These calls are sufficient for many of the cases encountered by MPI users.

MPI_Init Initializes the MPI processes
MPI_Comm_size Returns the number of allocated processes
MPI_Comm_rank Returns the rank of the process on which the code is executing
MPI_Send Sends a message
MPI_Recv Receives a message
MPI_Bcast Broadcasts data to all processes
MPI_Finalize Terminates the MPI processes

How to call basic MPI commands in Fortran

In an MPI program written in Fortran, one of the following lines must appear at the start of all program units (main program, subprograms, functions, modules) that use MPI:

include 'mpif.h'

or

use mpi


Basic subroutines in Fortran:

CALL MPI_Init( integer ierr )
CALL MPI_Comm_size( integer comm, integer size, integer ierr )
CALL MPI_Comm_rank( integer comm, integer rank, integer ierr )
CALL MPI_Send( choice buf, integer count, integer datatype, integer dest, integer tag, integer comm, integer ierr )
CALL MPI_Recv( choice buf, integer count, integer datatype, integer source, integer tag, integer comm, integer status(MPI_STATUS_SIZE), integer ierr )
CALL MPI_Bcast( choice buf, integer count, integer datatype, integer root, integer comm, integer ierr )
CALL MPI_Finalize( integer ierr )

How to call basic MPI commands in C

The following line must appear at the start of an MPI program written in C:

#include "mpi.h"


Basic functions in C:

int MPI_Init( int *argc, char ***argv )
int MPI_Comm_size( MPI_Comm comm, int *size )
int MPI_Comm_rank( MPI_Comm comm, int *rank )
int MPI_Send( void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm )
int MPI_Recv( void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status )
int MPI_Bcast( void* buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm )
int MPI_Finalize( void )
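
To show how these functions fit together, here is a minimal C sketch (not one of the detailed examples linked below; the value 42 is just a hypothetical payload) in which process 0 sends an integer to process 1, then broadcasts it to all processes:

#include <stdio.h>
#include <mpi.h>

int main( int argc, char **argv )
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if (rank == 0)
        value = 42;  /* the data originates on process 0 */

    if (size >= 2) {  /* the point-to-point part needs at least two processes */
        if (rank == 0)
            /* Send one integer to process 1, with tag 0. */
            MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
        else if (rank == 1)
            /* Receive one integer from process 0, with tag 0. */
            MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
    }

    /* Process 0 broadcasts the value; afterwards every process has a copy. */
    MPI_Bcast( &value, 1, MPI_INT, 0, MPI_COMM_WORLD );
    printf( "Process %d of %d has value %d\n", rank, size, value );

    MPI_Finalize();
    return 0;
}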

How to call basic MPI commands in C++

Since version 2.2 of the MPI standard, the C++ interface has been deprecated. Even though it is still part of the main distributions, we do not recommend using it, since it may be removed in the future. Please use the C interface instead. To understand what this means for you, we suggest the following excellent blog post: The MPI C++ bindings are gone: what does it mean to you?

If you would like to know the history of the C++ interface's disappearance, please consult this blog post.

Definition of parameters

ierr Error code returned (output). In C, the error code is returned by the function itself.
argc, argv Arguments passed to the main function of a C program. Not modified by MPI.
comm The MPI communicator (for example: MPI_COMM_WORLD)
size The number of MPI processes associated with the communicator
rank The rank of the MPI process within the group associated with a communicator
buf The address of the first element of the array (buffer) to transfer
count The number of elements to transfer
datatype The data type of the buffer's contents. You should use the types defined by MPI (MPI_INTEGER, MPI_REAL8, etc.)
source Rank where the data to be transferred comes from
dest Rank where the data to be transferred goes to
tag Label that identifies the message
root Rank from which a broadcast originates
status Structure describing the received message's state (array in Fortran)
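
As an illustration of how source, dest, tag and status work together, here is a small C sketch (the tag value 7 is arbitrary) in which every process other than 0 sends its rank to process 0; process 0 receives with the wildcard constants MPI_ANY_SOURCE and MPI_ANY_TAG, then reads the actual sender and tag from the status structure:

#include <stdio.h>
#include <mpi.h>

int main( int argc, char **argv )
{
    int rank, size, value, i;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if (rank != 0) {
        /* Every process except 0 sends its rank to process 0, with tag 7. */
        MPI_Send( &rank, 1, MPI_INT, 0, 7, MPI_COMM_WORLD );
    } else {
        for (i = 1; i < size; i++) {
            /* The wildcards accept a message from any sender, with any tag. */
            MPI_Recv( &value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                      MPI_COMM_WORLD, &status );
            /* The status structure records the actual sender and tag. */
            printf( "Received %d from rank %d with tag %d\n",
                    value, status.MPI_SOURCE, status.MPI_TAG );
        }
    }

    MPI_Finalize();
    return 0;
}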

Compilation

Compiling source code that uses MPI requires additional options compared to those for serial code. To simplify life for users, the library is distributed with compilation scripts that add all the options required for using MPI when the compiler is called. Hence, to compile Fortran code you use the command mpif90 or mpif77, to compile C code use mpicc, and to compile C++ use mpicxx.

This is why you will often find more than one version of the desired MPI library (MVAPICH2 or Open MPI) on Calcul Québec's systems. The module's name indicates whether the underlying compiler is GCC, Intel's compiler or PGI's compiler. Note that the options passed to the MPI wrapper script are forwarded to the underlying compiler, so you should use options that this compiler understands. For example, if you use the Open MPI module with the Intel compiler, you can compile a C source file as follows (note the option -xHost, which is only available with Intel compilers):

[name@server $] mpicc -xHost -c mySourceFile.c
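
The same wrapper is also used for linking. As a sketch (myProgram is a hypothetical executable name), the object file produced above can be linked as follows:

[name@server $] mpicc -xHost -o myProgram mySourceFile.o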


Execution

As explained above, MPI's method of parallel execution means that multiple instances of the same program are run, each one with different data and, generally, with communication between the instances. This means the program must be launched by a command that specifies the number of instances, along with other information when needed. The commands most commonly used at present are mpirun and mpiexec. We generally recommend mpiexec, whose usage is a bit more consistent from distribution to distribution. For example, you can run an MPI program with 4 instances as follows:

[name@server $] mpiexec -n 4 myMPIExecutable [args]


where [args] are optional arguments passed to the executable. With current MPI versions, each instance has access to this argument list. To learn the options specific to the MPI version you are using, please visit the Open MPI or MVAPICH2 page.

Detailed examples

Each example is available in Fortran and in C.

Example 1: Hello World: The origin of all examples

Example 2: Barrier: Another Hello World

Example 3: Broadcast: Collective communication

Example 4: Send-Recv: Point-to-point communication

Example 5: Blocking Send-Recv: Point-to-point communication

Example 6: Non-blocking Isend-Irecv: Point-to-point communication

Example 7: Scatter: Collective communication

Example 8: Scatterv: Collective communication

Example 9: Gather: Collective communication

Example 10: Allgather: Collective communication

Example 11: Alltoall: Collective communication

Example 12: Reduce: Collective communication

Example 13: MPI_Group: Creation of groups

Example 14: Pack: Using Pack

Example 15: Datatype: Creating derived data types

Example 16: Request: Master-slave method
