MPI Hands-On: Collective Communications II

  1. Different datatypes with a single MPI broadcast. A program, http://siber.cankaya.edu.tr/ozdogan/ParallelComputing/cfiles/code14.c code14.c, in which the broadcast routine (MPI_Bcast) is used to communicate different datatypes with a single call.
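code14.c is not reproduced here; a minimal sketch of the technique it names, assuming a derived struct datatype so that one MPI_Bcast call carries an int and a double together (the payload struct and its values are illustrative), might look like this:

```c
#include <mpi.h>
#include <stdio.h>
#include <stddef.h>

/* Illustrative payload: one int and one double broadcast together. */
struct payload { int n; double x; };

int main(int argc, char *argv[]) {
    int rank;
    struct payload p = {0, 0.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Describe the struct as a derived datatype: one MPI_INT followed
       by one MPI_DOUBLE, at their actual offsets in the struct. */
    MPI_Datatype ptype;
    int blocklens[2] = {1, 1};
    MPI_Aint displs[2] = {offsetof(struct payload, n),
                          offsetof(struct payload, x)};
    MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};
    MPI_Type_create_struct(2, blocklens, displs, types, &ptype);
    MPI_Type_commit(&ptype);

    if (rank == 0) {        /* root fills in the values */
        p.n = 42;
        p.x = 3.14;
    }

    /* A single MPI_Bcast call moves both datatypes at once. */
    MPI_Bcast(&p, 1, ptype, 0, MPI_COMM_WORLD);
    printf("rank %d received n=%d x=%g\n", rank, p.n, p.x);

    MPI_Type_free(&ptype);
    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and run under mpirun; every rank should print the values set on rank 0.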
  2. An SPMD program using broadcast and non-blocking receive. The http://siber.cankaya.edu.tr/ozdogan/ParallelComputing/cfiles/code16.c program consists of one sender process and up to 7 receiver processes.
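A sketch of the pattern described above (not code16.c itself): the sender broadcasts a common value, then sends one message per receiver, while each receiver posts a non-blocking MPI_Irecv and completes it with MPI_Wait. The message contents are illustrative:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, seed = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* 1 sender + up to 7 receivers */

    if (rank == 0)
        seed = 100;
    /* Broadcast a common value from the sender to all processes. */
    MPI_Bcast(&seed, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Sender: one individual message per receiver. */
        for (int dest = 1; dest < size; dest++) {
            int msg = seed + dest;
            MPI_Send(&msg, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        /* Receiver: post the receive early, overlap with other work,
           then wait for completion. */
        int msg;
        MPI_Request req;
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... independent computation could be done here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank %d got %d\n", rank, msg);
    }

    MPI_Finalize();
    return 0;
}
```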
  3. An SPMD program that uses MPI_Scatter. The http://siber.cankaya.edu.tr/ozdogan/ParallelComputing/cfiles/code17.c program should be run with an even number of processes.
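A minimal MPI_Scatter sketch (not code17.c itself; the buffer contents are illustrative): the root distributes one element of its array to each rank, including itself.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = NULL;
    if (rank == 0) {
        /* Only the root needs the full send buffer: one value per rank. */
        sendbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            sendbuf[i] = 10 * i;
    }

    int recvval;
    /* Each rank receives its own slice of the root's buffer. */
    MPI_Scatter(sendbuf, 1, MPI_INT, &recvval, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, recvval);

    free(sendbuf);
    MPI_Finalize();
    return 0;
}
```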
  4. An SPMD program that uses MPI_Gather. The http://siber.cankaya.edu.tr/ozdogan/ParallelComputing/cfiles/code18.c program should be run with an even number of processes.
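MPI_Gather is the inverse of the scatter pattern; a minimal sketch (not code18.c itself; the contributed values are illustrative) collects one value from every rank into a root-side array ordered by rank:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank * rank;   /* each rank contributes one value */
    int *recvbuf = NULL;
    if (rank == 0)               /* only the root needs the receive buffer */
        recvbuf = malloc(size * sizeof(int));

    /* Root collects one value from every rank, stored in rank order. */
    MPI_Gather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("from rank %d: %d\n", i, recvbuf[i]);
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}
```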
  5. Timing comparison of process and thread creation. Compares timing results for the fork() subroutine and the pthread_create() subroutine. http://siber.cankaya.edu.tr/ozdogan/ParallelComputing/cfiles/code39.c code39.c, http://siber.cankaya.edu.tr/ozdogan/ParallelComputing/cfiles/code40.c code40.c

Cem Ozdogan 2010-11-22