Non-blocking Isend-Irecv

From the Calcul Québec wiki

Examples 5 and 6 show the difference between the functions MPI_Send/MPI_Recv and MPI_Isend/MPI_Irecv.

Each process transfers a vector of random numbers (sendbuff) to the next process (taskid+1), which receives it in recvbuff. The last process sends its vector to process 0. The size of the vector (buffsize) is given as an argument to the program at run time. This example shows that MPI_Isend and MPI_Irecv are much better suited than MPI_Send and MPI_Recv to this kind of exchange. MPI_Isend and MPI_Irecv are non-blocking, meaning that the call returns before the communication is completed. You can therefore post the send before the receive without fear of deadlock, the situation where a process waits forever for a message that never comes because the process that should send it is itself blocked in a receive, waiting for a message of its own before reaching its send.

Because the functions MPI_Isend and MPI_Irecv do not wait for the communication to complete, you must call MPI_Wait for each send or receive that has to finish before the program can safely continue. As its name suggests, this function waits until the corresponding communication has completed before returning. In this example, each process posts a send request (MPI_Isend) and a receive request (MPI_Irecv), then waits on both. With non-blocking calls, all the exchanges between the processes can take place at the same time.
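When several requests are pending, the two consecutive MPI_Wait calls used in the listings can be collapsed into a single MPI_Waitall. A sketch of that variant, as a fragment: it assumes the send_request and recv_request handles filled in by the MPI_Isend/MPI_Irecv calls of the C listing below.

```c
/* Fragment: complete both non-blocking requests in one call.
   send_request and recv_request are assumed to have been set by
   the MPI_Isend/MPI_Irecv calls shown in the C listing. */
MPI_Request requests[2] = { send_request, recv_request };
MPI_Status  statuses[2];

/* Returns only once BOTH the send and the receive are complete. */
MPI_Waitall(2, requests, statuses);
```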

Before the communication, each process computes the sum of the vector it is about to send. The results are gathered by process 0 (using MPI_Gather), which prints them out. Similarly, after the communication each process computes the sum of the received vector; process 0 gathers all the sums and prints them out along with the communication times. Example 5 accomplishes the same work using MPI_Send and MPI_Recv. Try comparing the two!
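To build and run either version, an MPI installation typically provides wrapper compilers and a launcher. The command names below (mpicc, mpif90, mpirun) are the common defaults, not something this page mandates; your cluster may use different ones (for example mpiexec or a batch scheduler):

```shell
# Compile the C version (use mpif90 isend-irecv.f for the Fortran one)
mpicc isend_irecv.c -o isend_irecv

# Run with 5 processes, each exchanging a vector of 100000 doubles
mpirun -np 5 ./isend_irecv 100000
```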

In Fortran

File : isend-irecv.f
!--------------------------------------------------------
!  
!              sendbuff   recvbuff       sendbuff   recvbuff
!                                         
!              ########   ########       ########   ########
!              #      #   #      #       #      #   #      #
!          0   #  AA  #   #      #       #  AA  #   #  EE  #
!              #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!     T        #      #   #      #       #      #   #      #
!          1   #  BB  #   #      #       #  BB  #   #  AA  #
!     a        #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!     s        #      #   #      #       #      #   #      #
!          2   #  CC  #   #      #       #  CC  #   #  BB  #
!     k        #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!     s        #      #   #      #       #      #   #      #
!          3   #  DD  #   #      #       #  DD  #   #  CC  #
!              #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!              #      #   #      #       #      #   #      #
!          4   #  EE  #   #      #       #  EE  #   #  DD  #
!              #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!              
!                     BEFORE                    AFTER
!
! Author: Carol Gauthier
!        Centre de calcul scientifique
!        Université de Sherbrooke
!
! Last revision: August 2004
!--------------------------------------------------------
Program Example_MPI
 
  include 'mpif.h'
  integer ierr,ntasks,taskid,itask,status(MPI_STATUS_SIZE)
  integer i,j,k,buffsize,sentid,send_request,recv_request
  character argtmp*12
  real(8) inittime,recvtime,totaltime,rand,buffsum
 
  real(8),allocatable,dimension(:) :: recvtimes
  real(8),allocatable,dimension(:) :: buffsums
 
  real(8),allocatable,dimension(:) :: sendbuff
  real(8),allocatable,dimension(:) :: recvbuff
 
  !---------------------------------------------------------------
  ! MPI Initialisation. It's important to put this call at the
  ! beginning of the program, after variable declarations.
  call MPI_INIT( ierr )
 
  !---------------------------------------------------------------
  ! Get the number of MPI processes and the taskid of this process.
  call MPI_COMM_SIZE(MPI_COMM_WORLD,ntasks,ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD,taskid,ierr)
 
  !---------------------------------------------------------------
  ! Get buffsize value from program arguments.
  call getarg(1,argtmp)
  read(argtmp,'(I12)')buffsize
 
  !---------------------------------------------------------------
  ! Printing out the description of the example.
  if ( taskid.eq.0 )then
    write(6,'(A)')
    write(6,'(A)')"##########################################################"
    write(6,'(A)')
    write(6,'(A)')" Example 6"
    write(6,'(A)')
    write(6,'(A)')" Point-to-point communication: MPI_Isend MPI_Irecv"
    write(6,'(A)')
    write(6,'(A,I12)')" Vector size:",buffsize
    write(6,'(A,I5)')" Number of processes:",ntasks
    write(6,'(A)')
    write(6,'(A)')"##########################################################"
    write(6,'(A)')
    write(6,'(A)')"                --> BEFORE COMMUNICATION <--"
    write(6,'(A)')
  endif
 
   !---------------------------------------------------------------
   ! Memory allocation. 
    allocate( sendbuff(0:buffsize-1) )
    allocate( recvbuff(0:buffsize-1) )
    allocate( recvtimes(0:ntasks-1) )
    allocate( buffsums(0:ntasks-1) )
 
   !-----------------------------------------------------------------
   ! Vectors and matrices initialisation.
   call srand(taskid*10)
   do i=0,buffsize-1
    sendbuff(i)=rand()
   end do
 
   !-----------------------------------------------------------------
   ! Print out before communication.
 
   buffsum=0.0
   do i=0,buffsize-1
      buffsum=buffsum+sendbuff(i)
   end do
 
   call MPI_Gather(buffsum,1,MPI_REAL8,buffsums,1, MPI_REAL8,0,MPI_COMM_WORLD,ierr)
 
   if(taskid.eq.0)then
    do itask=0,ntasks-1
      sentid=itask+1
      if(itask.eq.(ntasks-1))sentid=0
      write(6,'(A,I3,A,I3,A,E14.8)')  &
      &       "Process",itask,": Sum of vector sent to ",sentid," =", buffsums(itask)
    end do
   end if
 
   !-----------------------------------------------------------------
   ! Communication
 
   inittime = MPI_Wtime()
 
   if ( taskid.eq.0 )then
     call MPI_Irecv(recvbuff,buffsize,MPI_REAL8,ntasks-1,MPI_ANY_TAG, &
     &             MPI_COMM_WORLD,recv_request,ierr)
     recvtime = MPI_Wtime()
     call MPI_Isend(sendbuff,buffsize,MPI_REAL8,taskid+1,0,MPI_COMM_WORLD,send_request,ierr)  
   else if( taskid.eq.(ntasks-1) )then
     call MPI_Isend(sendbuff,buffsize,MPI_REAL8,0,0,MPI_COMM_WORLD,send_request,ierr)
     call MPI_Irecv(recvbuff,buffsize,MPI_REAL8,taskid-1,MPI_ANY_TAG, &
     &             MPI_COMM_WORLD,recv_request,ierr)
     recvtime = MPI_Wtime()
   else
     call MPI_Irecv(recvbuff,buffsize,MPI_REAL8,taskid-1,MPI_ANY_TAG, &
     &             MPI_COMM_WORLD,recv_request,ierr)
     recvtime = MPI_Wtime()
     call MPI_Isend(sendbuff,buffsize,MPI_REAL8,taskid+1,0,MPI_COMM_WORLD,send_request,ierr)
   end if
 
   call MPI_Wait(send_request,status,ierr)
   call MPI_Wait(recv_request,status,ierr)
 
   totaltime=MPI_Wtime()-inittime
 
   !-----------------------------------------------------------------
   ! Print out after communication.
 
   buffsum=0.0
   do i=0,buffsize-1
      buffsum=buffsum+recvbuff(i)
   end do
   if ( taskid.eq.0 )then
    do itask=0,ntasks-1
      buffsums(itask)=0.0
    end do     
   end if
 
   call MPI_Gather(buffsum,1,MPI_REAL8,buffsums,1, MPI_REAL8,0,MPI_COMM_WORLD,ierr)
 
   call MPI_Gather(recvtime,1,MPI_REAL8,recvtimes,1, MPI_REAL8,0,MPI_COMM_WORLD,ierr)
 
  if(taskid.eq.0)then
        write(6,*)
    write(6,'(A)')"##########################################################"
    write(6,*)
    write(6,'(A)')"                --> AFTER COMMUNICATION <--"
    write(6,*)
    do itask=0,ntasks-1
      write(6,'(A,I3,A,F5.2,A,e14.8)')"Process ",itask,": Vector received at",recvtimes(itask), &
      &                             " seconds: Sum = ",buffsums(itask)
    end do
    write(6,*)
    write(6,'(A)')"##########################################################"
    write(6,'(A,F5.2,A)')" Communication Time: ",totaltime," seconds"
    write(6,'(A)')"##########################################################"
    write(6,*)
  end if
 
  !-----------------------------------------------------------------
  ! Free the allocated memory
  deallocate(sendbuff)
  deallocate(recvbuff)
  deallocate(recvtimes)
  deallocate(buffsums)
 
  !-----------------------------------------------------------------
  ! MPI finalisation
 
  call MPI_FINALIZE( ierr )
 
end


In C

File : isend_irecv.c
/*--------------------------------------------------------
 
              sendbuff   recvbuff       sendbuff   recvbuff
 
              ########   ########       ########   ########
              #      #   #      #       #      #   #      #
          0   #  AA  #   #      #       #  AA  #   #  EE  #
              #      #   #      #       #      #   #      #
              ########   ########       ########   ########
     T        #      #   #      #       #      #   #      #
          1   #  BB  #   #      #       #  BB  #   #  AA  #
     a        #      #   #      #       #      #   #      #
              ########   ########       ########   ########
     s        #      #   #      #       #      #   #      #
          2   #  CC  #   #      #       #  CC  #   #  BB  #
     k        #      #   #      #       #      #   #      #
              ########   ########       ########   ########
     s        #      #   #      #       #      #   #      #
          3   #  DD  #   #      #       #  DD  #   #  CC  #
              #      #   #      #       #      #   #      #
              ########   ########       ########   ########
              #      #   #      #       #      #   #      #
          4   #  EE  #   #      #       #  EE  #   #  DD  #
              #      #   #      #       #      #   #      #
              ########   ########       ########   ########
 
                     BEFORE                    AFTER
 
 Author: Carol Gauthier
         Centre de calcul scientifique
         Université de Sherbrooke
 
 Last revision: September 2005
-------------------------------------------------------- */
 
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <math.h>
#include "mpi.h"
 
int main(int argc,char** argv)
{
   /*===============================================================*/
   /* Declaration of variables                                     */
   int          taskid, ntasks;
   MPI_Status   status;
   MPI_Request	send_request,recv_request;
   int          ierr,i,j,itask,recvtaskid;
   int	        buffsize;
   double       *sendbuff,*recvbuff;
   double       sendbuffsum,recvbuffsum;
   double       sendbuffsums[1024],recvbuffsums[1024];
   double       inittime,totaltime,recvtime,recvtimes[1024];
 
   /*===============================================================*/
   /* MPI Initialisation. It is important to put this call at the   */
   /* beginning of the program, after variable declarations.        */
   MPI_Init(&argc, &argv);
 
   /*===============================================================*/
   /* Get the number of MPI processes and the taskid of this process.      */
   MPI_Comm_rank(MPI_COMM_WORLD,&taskid);
   MPI_Comm_size(MPI_COMM_WORLD,&ntasks);
 
   /*===============================================================*/
   /* Get buffsize value from program arguments.                    */
   buffsize=atoi(argv[1]);
 
   /*===============================================================*/
   /* Printing out the description of the example.                  */
   if ( taskid == 0 ){
     printf("\n\n\n");
     printf("##########################################################\n\n");
     printf(" Example 6 \n\n");
     printf(" Point-to-point communication: MPI_Isend MPI_Irecv \n\n");
     printf(" Vector size: %d\n",buffsize);
     printf(" Number of tasks: %d\n\n",ntasks);
     printf("##########################################################\n\n");
     printf("                --> BEFORE COMMUNICATION <--\n\n");
   }
 
   /*=============================================================*/
   /* Memory allocation.                                          */ 
   sendbuff=(double *)malloc(sizeof(double)*buffsize);
   recvbuff=(double *)malloc(sizeof(double)*buffsize);
 
   /*=============================================================*/
   /* Vectors or matrices initialisation.                      */
   srand((unsigned)time( NULL ) + taskid);
   for(i=0;i<buffsize;i++){
     sendbuff[i]=(double)rand()/RAND_MAX;
   }
 
   /*==============================================================*/
   /* Print out before communication.                              */
 
   sendbuffsum=0.0;
   for(i=0;i<buffsize;i++){
     sendbuffsum += sendbuff[i];
   }   
   ierr=MPI_Gather(&sendbuffsum,1,MPI_DOUBLE,
                   sendbuffsums,1, MPI_DOUBLE,
                   0,MPI_COMM_WORLD);                  
   if(taskid==0){
     for(itask=0;itask<ntasks;itask++){
       recvtaskid=itask+1;
       if(itask==(ntasks-1))recvtaskid=0;
       printf("Process %d: sum of vector sent to %d = %e\n",
               itask,recvtaskid,sendbuffsums[itask]);
     }  
   }
 
   /*===============================================================*/
   /* Communication.                                                */
 
   inittime = MPI_Wtime();
 
   if ( taskid == 0 ){
     ierr=MPI_Isend(sendbuff,buffsize,MPI_DOUBLE,
	           taskid+1,0,MPI_COMM_WORLD,&send_request);   
     ierr=MPI_Irecv(recvbuff,buffsize,MPI_DOUBLE,
	           ntasks-1,MPI_ANY_TAG,MPI_COMM_WORLD,&recv_request);
     recvtime = MPI_Wtime();
   }
   else if( taskid == ntasks-1 ){
     ierr=MPI_Isend(sendbuff,buffsize,MPI_DOUBLE,
	           0,0,MPI_COMM_WORLD,&send_request);   
     ierr=MPI_Irecv(recvbuff,buffsize,MPI_DOUBLE,
	           taskid-1,MPI_ANY_TAG,MPI_COMM_WORLD,&recv_request);
     recvtime = MPI_Wtime();
   }
   else{
     ierr=MPI_Isend(sendbuff,buffsize,MPI_DOUBLE,
	           taskid+1,0,MPI_COMM_WORLD,&send_request);
     ierr=MPI_Irecv(recvbuff,buffsize,MPI_DOUBLE,
	           taskid-1,MPI_ANY_TAG,MPI_COMM_WORLD,&recv_request);
     recvtime = MPI_Wtime();
   }
   ierr=MPI_Wait(&send_request,&status);
   ierr=MPI_Wait(&recv_request,&status);
 
   totaltime = MPI_Wtime() - inittime;
 
   /*===============================================================*/
   /* Print out after communication.                                */
 
   recvbuffsum=0.0;
   for(i=0;i<buffsize;i++){
     recvbuffsum += recvbuff[i];
   }   
 
   ierr=MPI_Gather(&recvbuffsum,1,MPI_DOUBLE,
                   recvbuffsums,1, MPI_DOUBLE,
                   0,MPI_COMM_WORLD);
 
   ierr=MPI_Gather(&recvtime,1,MPI_DOUBLE,
                   recvtimes,1, MPI_DOUBLE,
                   0,MPI_COMM_WORLD);
 
   if(taskid==0){
     printf("##########################################################\n\n");
     printf("                --> AFTER COMMUNICATION <-- \n\n");
     for(itask=0;itask<ntasks;itask++){
       printf("Process %d: Sum of received vector= %e: Time=%f seconds\n",
               itask,recvbuffsums[itask],recvtimes[itask]);
     }  
     printf("\n");
     printf("##########################################################\n\n");
     printf(" Communication time: %f seconds\n\n",totaltime);  
     printf("##########################################################\n\n");
   }
 
   /*===============================================================*/
   /* Free the allocated memory.                                    */
   free(recvbuff);
   free(sendbuff);
 
   /*===============================================================*/
   /* MPI finalisation.                                             */
   MPI_Finalize();

   return 0;
}

