Request


For some applications it is difficult to obtain a balanced workload across all processors. When the number of processors is sufficiently high and the work can easily be subdivided into many small tasks, it is worthwhile to use a master-slave approach.

We show how to use this method for a case where we study a large number of particles. The total volume containing the particles is first subdivided into a large number of small volume elements. In this application, each processor must randomly generate the particles' positions within those small volumes.

Note that, within the master-slave approach, the notion of a request (type MPI_Request) is central. A request object is created when the master posts a non-blocking receive (MPI_Irecv) for a message that may come from any other processor. The request can then be tested or waited on as many times as desired, and is released by the master process once the communication completes.

This example shows how to use MPI_Irecv and MPI_Waitany for asynchronous communication in MPI. The processor of rank 0 is called the master while the other processors are the slaves. Such an approach is useful when it is hard to guarantee a well-balanced load and the total number of tasks is very large. As soon as a slave processor is done with a task, it contacts the master to receive a new one.


File: request.c
/*--------------------------------------------------------
 Author: Steve Allen
         Centre de calcul scientifique
         Université de Sherbrooke
 
 Last update: October 2007
--------------------------------------------------------*/
 
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
 
int main(int argc,char** argv)
{
   int         master = 0;
   int         finish_flag = -1;
   int         myrank, nprocs;
   int         nvol, nptc, totptc;
   int         i, nfinish, *buf_recv, nslaves, last;
   float       x, y, z;
   MPI_Request *request;
   MPI_Status  status;
 
   /* Initialization */
   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
 
   /* At least 2 processes are required */
   if( nprocs < 2 )
   {
       printf("%d cpus is not enough for this example\n", nprocs);
       MPI_Finalize();
       exit( 1 );
   }
 
   nvol = 0;
   if (myrank == master){
 
      /* Operations for the master */
      if( argc > 1 )
      {
          nvol = atoi( argv[1] );
      }
      if( nvol <= 0 ) nvol = 6000;
      printf("Generating nvol=%d with %d procs\n", nvol, nprocs-1);
 
      nslaves = nprocs - 1;
      buf_recv = (int*) malloc( nslaves * sizeof(int) );
      request = (MPI_Request*) malloc( nslaves * sizeof(MPI_Request) );
      last = 0;
      nfinish = 0;
 
      /* Send a message indicating the start of work */
      for( i = 0; i < nslaves; i++ )
      {
         if( last < nvol )
         {
            last++;
            MPI_Send( &last, 1, MPI_INT, i+1, 0, MPI_COMM_WORLD );
            MPI_Irecv( &(buf_recv[i]), 1, MPI_INT, i+1, i+1, MPI_COMM_WORLD, &(request[i]) );
         }
         else
         {
            MPI_Send( &finish_flag, 1, MPI_INT, i+1, 0, MPI_COMM_WORLD );
            nfinish++;
         }
      }
      while( nfinish < nslaves )
      {
         /* Waiting for a message coming from the slave cpus */
         MPI_Waitany( nslaves, request, &i, &status );
         if( last < nvol )
         {
            /* If there are still tasks to be accomplished */
            last++;
            MPI_Send( &last, 1, MPI_INT, i+1, 0, MPI_COMM_WORLD );
            MPI_Irecv( &(buf_recv[i]), 1, MPI_INT, i+1, i+1, MPI_COMM_WORLD, &(request[i]) );
         }
         else
         {
            /* If every task has finished */
            MPI_Send( &finish_flag, 1, MPI_INT, i+1, 0, MPI_COMM_WORLD );
            nfinish++;
         }
      }
      free( request );
      free( buf_recv );
 
   } else {
 
      /* Tasks for the slaves */
      totptc = 0;
      nvol = 0;
      buf_recv = (int*) malloc( sizeof(int) );
      MPI_Recv( buf_recv, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
 
      /* Loop until the master sends the finish flag */
      while( *buf_recv != finish_flag )
      {
         /* Perform a task: generate random 3D positions */
         nptc = (int) ( ( (float) rand() ) / RAND_MAX * 2500 ) + 2500;
         for( i = 0; i < nptc; i++ )
         {
            x = ( (float) rand() ) / RAND_MAX;
            y = ( (float) rand() ) / RAND_MAX;
            z = ( (float) rand() ) / RAND_MAX;
         }
         totptc += nptc;
         nvol++;
         MPI_Send( &nptc, 1, MPI_INT, 0, myrank, MPI_COMM_WORLD );
         MPI_Recv( buf_recv, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
      }
      free( buf_recv );
 
      printf("Generated %d vol with a total of %d ptcs\n", nvol, totptc);
   }
 
   MPI_Barrier( MPI_COMM_WORLD );
   MPI_Finalize();

   return 0;
}

