Blocking Send-Recv

Examples 5 and 6 show the difference between the functions MPI_Send/MPI_Recv and MPI_Isend/MPI_Irecv.

Each process transfers a vector of random numbers (sendbuff) of dimension buffsize to the next process (taskid+1), which receives it in recvbuff; the last process sends its vector to process 0. The size of the vector (buffsize) is given as an argument to the program at run time. The example shows that MPI_Send and MPI_Recv are not the most efficient functions for this work, since they operate in blocking mode: they wait for the transfer to finish before execution continues. To receive its vector, each process must post an MPI_Recv matching the sender's MPI_Send, and therefore has to wait for that reception to complete before sending its own sendbuff to the next process. To avoid a deadlock, one of the processes must initiate the communication by posting its MPI_Recv after its MPI_Send. Here the last process plays that role, starting a cascade in which each process waits for the complete reception from its predecessor before sending sendbuff to its successor; the whole exchange finishes when the last process has completed its reception.

Before the communication, each process computes the sum of the vector it is about to send. Likewise, after the communication, process 0 gathers the sums of the vectors received by every process and prints them together with the communication times. Example 6 shows how to use non-blocking communication (MPI_Isend and MPI_Irecv) to accomplish the same work in much less time.
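The ordering that avoids the deadlock can be condensed into the minimal sketch below (in C). It sends a single double instead of a full vector, and the variable names and the guard against running on fewer than two processes are illustrative only, not part of Example 5. Every rank except the last receives before it sends; the last rank sends before it receives, which is what breaks the circular wait.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
   int taskid, ntasks;
   double sendval, recvval;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
   MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

   if (ntasks < 2) {              /* the ring needs at least two processes */
      MPI_Finalize();
      return 0;
   }

   sendval = (double)taskid;      /* illustrative payload: each rank sends its id */

   if (taskid == ntasks - 1) {
      /* The last rank initiates the cascade: send first, then receive. */
      MPI_Send(&sendval, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
      MPI_Recv(&recvval, 1, MPI_DOUBLE, taskid - 1, MPI_ANY_TAG,
               MPI_COMM_WORLD, &status);
   } else {
      /* All other ranks complete their receive before sending on;
         if every rank posted MPI_Recv first, all would block (deadlock). */
      int src = (taskid == 0) ? ntasks - 1 : taskid - 1;
      MPI_Recv(&recvval, 1, MPI_DOUBLE, src, MPI_ANY_TAG,
               MPI_COMM_WORLD, &status);
      MPI_Send(&sendval, 1, MPI_DOUBLE, taskid + 1, 0, MPI_COMM_WORLD);
   }

   printf("Process %d received %f from its predecessor\n", taskid, recvval);

   MPI_Finalize();
   return 0;
}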

In Fortran

File : send_recv.f
! --------------------------------------------------------
!   
!              sendbuff   recvbuff       sendbuff   recvbuff
!                                         
!              ########   ########       ########   ########
!              #      #   #      #       #      #   #      #
!          0   #  AA  #   #      #       #  AA  #   #  EE  #
!              #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!     T        #      #   #      #       #      #   #      #
!          1   #  BB  #   #      #       #  BB  #   #  AA  #
!     a        #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!     s        #      #   #      #       #      #   #      #
!          2   #  CC  #   #      #       #  CC  #   #  BB  #
!     k        #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!     s        #      #   #      #       #      #   #      #
!          3   #  DD  #   #      #       #  DD  #   #  CC  #
!              #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!              #      #   #      #       #      #   #      #
!          4   #  EE  #   #      #       #  EE  #   #  DD  #
!              #      #   #      #       #      #   #      #
!              ########   ########       ########   ########
!              
!                     BEFORE                    AFTER
!
! Author: Carol Gauthier
!         Centre de calcul scientifique
!         Université de Sherbrooke
!
! Last revision: August 2004
! --------------------------------------------------------
Program Example_MPI
 
  include 'mpif.h'
  integer ierr,ntasks,taskid,itask,status(MPI_STATUS_SIZE)
  integer i,j,k,buffsize,sentid
  character argtmp*12
  real(8) inittime,recvtime,totaltime,rand,buffsum
 
  real(8),allocatable,dimension(:) :: recvtimes
  real(8),allocatable,dimension(:) :: buffsums
 
  real(8),allocatable,dimension(:) :: sendbuff
  real(8),allocatable,dimension(:) :: recvbuff
 
  !---------------------------------------------------------------
  ! MPI Initialisation. It's important to put this call at the
  ! beginning of the program, after variable declarations. 
  call MPI_INIT( ierr )
 
  !---------------------------------------------------------------
  ! Get the number of MPI processes and the taskid of this process.
  call MPI_COMM_SIZE(MPI_COMM_WORLD,ntasks,ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD,taskid,ierr)
 
  !---------------------------------------------------------------
  ! Get buffsize value from program arguments.
  call getarg(1,argtmp)
  read(argtmp,'(I12)')buffsize
 
   !---------------------------------------------------------------
   ! Printing out the description of the example.
   if ( taskid.eq.0 )then
     write(6,'(A)')
    write(6,'(A)')"##########################################################"
    write(6,'(A)')
    write(6,'(A)')" Example 5"
    write(6,'(A)')
    write(6,'(A)')" Point-to-point Communication: MPI_Send MPI_Recv"
    write(6,'(A)')
    write(6,'(A,I12)')" Vector size:",buffsize
    write(6,'(A,I5)')" Number of processes:",ntasks
    write(6,'(A)')
    write(6,'(A)')"##########################################################"
    write(6,'(A)')
    write(6,'(A)')"                --> BEFORE COMMUNICATION <--"
    write(6,'(A)')
   endif
 
   !---------------------------------------------------------------
   ! Memory allocation. 
    allocate( sendbuff(0:buffsize-1) )
    allocate( recvbuff(0:buffsize-1) )
    allocate( recvtimes(0:ntasks-1) )
    allocate( buffsums(0:ntasks-1) )
 
   !-----------------------------------------------------------------
   ! Vectors and matrices initialisation.
   call srand(taskid*10)
   do i=0,buffsize-1
    sendbuff(i)=rand()
   end do
 
   !-----------------------------------------------------------------
   ! Print out before communication.
   buffsum=0.0
   do i=0,buffsize-1
      buffsum=buffsum+sendbuff(i)
   end do
 
   call MPI_Gather(buffsum,1,MPI_REAL8,buffsums,1, MPI_REAL8,0,MPI_COMM_WORLD,ierr)
 
   if(taskid.eq.0)then
    do itask=0,ntasks-1
      sentid=itask+1
      if(itask.eq.(ntasks-1))sentid=0
      write(6,'(A,I3,A,I3,A,E14.8)')  &
      &       "Process",itask,": Sum of vector sent to ",sentid," =", buffsums(itask)
    end do
   end if
 
   !-----------------------------------------------------------------
   ! Communication
 
   inittime = MPI_Wtime()
 
   if ( taskid.eq.0 )then
     call MPI_Recv(recvbuff,buffsize,MPI_REAL8,ntasks-1,MPI_ANY_TAG, &
     &             MPI_COMM_WORLD,status,ierr)
     recvtime = MPI_Wtime() - inittime
     call MPI_Send(sendbuff,buffsize,MPI_REAL8,taskid+1,0,MPI_COMM_WORLD,ierr)  
   else if( taskid.eq.(ntasks-1) )then
     call MPI_Send(sendbuff,buffsize,MPI_REAL8,0,0,MPI_COMM_WORLD,ierr)
     call MPI_Recv(recvbuff,buffsize,MPI_REAL8,taskid-1,MPI_ANY_TAG, &
     &             MPI_COMM_WORLD,status,ierr)
     recvtime = MPI_Wtime() - inittime
   else
     call MPI_Recv(recvbuff,buffsize,MPI_REAL8,taskid-1,MPI_ANY_TAG, &
     &             MPI_COMM_WORLD,status,ierr)
     recvtime = MPI_Wtime() - inittime
     call MPI_Send(sendbuff,buffsize,MPI_REAL8,taskid+1,0,MPI_COMM_WORLD,ierr)
   end if
 
   call MPI_Barrier(MPI_COMM_WORLD,ierr)
 
   totaltime=MPI_Wtime()-inittime
 
   !-----------------------------------------------------------------
   ! Print out after communication.
   buffsum=0.0
   do i=0,buffsize-1
      buffsum=buffsum+recvbuff(i)
   end do
   if ( taskid.eq.0 )then
    do itask=0,ntasks-1
      buffsums(itask)=0.0
    end do     
   end if
 
   call MPI_Gather(buffsum,1,MPI_REAL8,buffsums,1, MPI_REAL8,0,MPI_COMM_WORLD,ierr)
 
   call MPI_Gather(recvtime,1,MPI_REAL8,recvtimes,1, MPI_REAL8,0,MPI_COMM_WORLD,ierr)
 
  if(taskid.eq.0)then
    write(6,*)
    write(6,'(A)')"##########################################################"
    write(6,*)
    write(6,'(A)')"                --> AFTER COMMUNICATION <--"
    write(6,*)
    do itask=0,ntasks-1
      write(6,'(A,I3,A,F5.2,A,e14.8)')"Process ",itask,": Vector received at",recvtimes(itask), &
      &                             " seconds: Sum = ",buffsums(itask)
    end do
    write(6,*)
    write(6,'(A)')"##########################################################"
    write(6,'(A,F5.2,A)')" Communication Time: ",totaltime," seconds"
    write(6,'(A)')"##########################################################"
    write(6,*)
  end if
 
  !-----------------------------------------------------------------
  ! Free the allocated memory
  deallocate(sendbuff)
  deallocate(recvbuff)
  deallocate(recvtimes)
  deallocate(buffsums)
 
  !-----------------------------------------------------------------
  ! MPI finalisation
 
  call MPI_FINALIZE( ierr )
 
end


In C

File : send_recv.c
/* --------------------------------------------------------
 
              sendbuff   recvbuff       sendbuff   recvbuff
 
              ########   ########       ########   ########
              #      #   #      #       #      #   #      #
          0   #  AA  #   #      #       #  AA  #   #  EE  #
              #      #   #      #       #      #   #      #
              ########   ########       ########   ########
     T        #      #   #      #       #      #   #      #
          1   #  BB  #   #      #       #  BB  #   #  AA  #
     a        #      #   #      #       #      #   #      #
              ########   ########       ########   ########
     s        #      #   #      #       #      #   #      #
          2   #  CC  #   #      #       #  CC  #   #  BB  #
     k        #      #   #      #       #      #   #      #
              ########   ########       ########   ########
     s        #      #   #      #       #      #   #      #
          3   #  DD  #   #      #       #  DD  #   #  CC  #
              #      #   #      #       #      #   #      #
              ########   ########       ########   ########
              #      #   #      #       #      #   #      #
          4   #  EE  #   #      #       #  EE  #   #  DD  #
              #      #   #      #       #      #   #      #
              ########   ########       ########   ########
 
                     BEFORE                    AFTER
 
 Author: Carol Gauthier
         Centre de calcul scientifique
         Université de Sherbrooke
 
 Last revision: September 2005
-------------------------------------------------------- */
#include <malloc.h>
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include "math.h"
#include "mpi.h"
 
int main(int argc,char** argv)
{
   /*===============================================================*/
   /* Declaration of variables                                     */
   int          taskid, ntasks;
   MPI_Status   status;
   MPI_Request	req[1024];
   int          ierr,i,j,itask;
   int	        buffsize,recvtaskid;
   double       *sendbuff,*recvbuff;
   double       sendbuffsum,recvbuffsum;
   double       sendbuffsums[1024],recvbuffsums[1024];
   double       inittime,totaltime,recvtime,recvtimes[1024];
 
   /*===============================================================*/
   /* MPI Initialisation. It is important to put this call at the   */
   /* beginning of the program, after variable declarations.        */
   MPI_Init(&argc, &argv);
 
   /*===============================================================*/
   /* Get the number of MPI processes and the taskid of this process.      */
   MPI_Comm_rank(MPI_COMM_WORLD,&taskid);
   MPI_Comm_size(MPI_COMM_WORLD,&ntasks);
 
   /*===============================================================*/
   /* Get buffsize value from program arguments.                    */
   buffsize=atoi(argv[1]);
 
   /*===============================================================*/
   /* Printing out the description of the example.                  */
   if ( taskid == 0 ){
          printf("\n\n\n");
     printf("##########################################################\n\n");
     printf(" Example 5 \n\n");
     printf(" Point-to-point Communication: MPI_Send MPI_Recv \n\n");
     printf(" Vector size: %d\n",buffsize);
     printf(" Number of processes: %d\n\n",ntasks);
     printf("##########################################################\n\n");
     printf("                --> BEFORE COMMUNICATION <--\n\n");
   }
 
   /*=============================================================*/
   /* Memory allocation.                                          */ 
 
   sendbuff=(double *)malloc(sizeof(double)*buffsize);
   recvbuff=(double *)malloc(sizeof(double)*buffsize);
 
   /*=============================================================*/
   /* Vectors and matrices initialisation.                      */
   srand((unsigned)time( NULL ) + taskid);
   for(i=0;i<buffsize;i++){
     sendbuff[i]=(double)rand()/RAND_MAX;
   }
 
   /*==============================================================*/
   /* Print out before communication.                              */
 
   sendbuffsum=0.0;
   for(i=0;i<buffsize;i++){
     sendbuffsum += sendbuff[i];
   }   
   ierr=MPI_Gather(&sendbuffsum,1,MPI_DOUBLE,
                   sendbuffsums,1, MPI_DOUBLE,
                   0,MPI_COMM_WORLD);
 
   if(taskid==0){
     for(itask=0;itask<ntasks;itask++){
       recvtaskid=itask+1;
       if(itask==(ntasks-1))recvtaskid=0;
       printf("Process %d: Sum of vector sent to %d = %e\n",
               itask,recvtaskid,sendbuffsums[itask]);
     }  
   }
 
   /*===============================================================*/
   /* Communication.                                                */
 
   inittime = MPI_Wtime();
 
   if ( taskid == 0 ){
     ierr=MPI_Recv(recvbuff,buffsize,MPI_DOUBLE,
	           ntasks-1,MPI_ANY_TAG,MPI_COMM_WORLD,&status);
     recvtime = MPI_Wtime() - inittime;
     ierr=MPI_Send(sendbuff,buffsize,MPI_DOUBLE,
	           taskid+1,0,MPI_COMM_WORLD);   
   }
   else if( taskid == ntasks-1 ){
     ierr=MPI_Send(sendbuff,buffsize,MPI_DOUBLE,
	           0,0,MPI_COMM_WORLD);   
     ierr=MPI_Recv(recvbuff,buffsize,MPI_DOUBLE,
	           taskid-1,MPI_ANY_TAG,MPI_COMM_WORLD,&status);
     recvtime = MPI_Wtime() - inittime;
   }
   else{
     ierr=MPI_Recv(recvbuff,buffsize,MPI_DOUBLE,
	           taskid-1,MPI_ANY_TAG,MPI_COMM_WORLD,&status);
     recvtime = MPI_Wtime() - inittime;
     ierr=MPI_Send(sendbuff,buffsize,MPI_DOUBLE,
	           taskid+1,0,MPI_COMM_WORLD);
   }
 
   MPI_Barrier(MPI_COMM_WORLD);
 
   totaltime = MPI_Wtime() - inittime;
 
   /*===============================================================*/
   /* Print out after communication.                                */
 
   recvbuffsum=0.0;
   for(i=0;i<buffsize;i++){
     recvbuffsum += recvbuff[i];
   }   
 
   ierr=MPI_Gather(&recvbuffsum,1,MPI_DOUBLE,
                   recvbuffsums,1, MPI_DOUBLE,
                   0,MPI_COMM_WORLD);
 
   ierr=MPI_Gather(&recvtime,1,MPI_DOUBLE,
                   recvtimes,1, MPI_DOUBLE,
                   0,MPI_COMM_WORLD);
 
   if(taskid==0){
     printf("\n");
     printf("##########################################################\n\n");
     printf("                --> AFTER COMMUNICATION <-- \n\n");
     for(itask=0;itask<ntasks;itask++){
       printf("Process %d: Sum of received vector= %e: Time=%f seconds\n",
               itask,recvbuffsums[itask],recvtimes[itask]);
     }
     printf("\n");
     printf("##########################################################\n\n");
     printf(" Communication time: %f seconds\n\n",totaltime);  
     printf("##########################################################\n\n");
   }
 
   /*===============================================================*/
   /* Free the allocated memory.                                    */
   free(recvbuff);
   free(sendbuff);
 
   /*===============================================================*/
   /* MPI finalisation.                                             */
   MPI_Finalize();
 
}
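Both versions are compiled with the MPI wrapper compilers and launched with the vector size as the single command-line argument. The commands below show a typical invocation; the wrapper and launcher names (mpif90, mpicc, mpiexec), the number of processes and any modules to load are site-specific, and with some compilers the free-form Fortran source may need a flag such as -ffree-form:

 mpif90 send_recv.f -o send_recv_f
 mpicc send_recv.c -o send_recv_c
 mpiexec -n 8 ./send_recv_f 1000000
 mpiexec -n 8 ./send_recv_c 1000000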

