MPI is discussed in the Slides from lectures, starting with Lecture 18.
MPI stands for Message Passing Interface and is a standard approach for programming distributed memory machines such as clusters, supercomputers, or heterogeneous networks of computers. It can also be used on a single shared memory computer, although it is often more cumbersome to program in MPI than in OpenMP.
A number of different implementations are available, both open source and vendor-supplied for specific machines. See this list, for example.
The class VM has Open MPI installed, so the libraries needed to compile and run MPI programs are already available.
You should be able to do the following:
$ cd $CLASSHG/codes/mpi
$ mpif90 test1.f90
$ mpiexec -n 4 a.out
and see output like the following (the order of the lines will generally vary from run to run, since the processes execute concurrently):
Hello from Process number 1 of 4 processes
Hello from Process number 3 of 4 processes
Hello from Process number 0 of 4 processes
Hello from Process number 2 of 4 processes
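The program test1.f90 is a simple "hello world" test. A minimal sketch of a program that produces this kind of output is shown below; the version in the class repository may differ in details.

program test1
    use mpi
    implicit none
    integer :: ierr, numprocs, proc_num

    ! start up MPI and find out how many processes there are
    ! and which one this is
    call mpi_init(ierr)
    call mpi_comm_size(MPI_COMM_WORLD, numprocs, ierr)
    call mpi_comm_rank(MPI_COMM_WORLD, proc_num, ierr)

    print *, "Hello from Process number", proc_num, &
             "of", numprocs, "processes"

    call mpi_finalize(ierr)

end program test1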
For hints on installation on a Mac, see Installing MPI on a Mac.
There are a few sample codes in the $CLASSHG/codes/mpi directory.
See Jacobi iteration using MPI.
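The key idea in that code is that each process owns a slice of the grid and must exchange boundary ("ghost") values with its neighbors before each sweep. The sketch below illustrates this communication pattern for a 1D Laplace problem; it is not the code from the page linked above, and the local grid size, iteration count, and boundary values are made up for illustration.

program jacobi_sketch
    use mpi
    implicit none
    integer, parameter :: nlocal = 100     ! interior points per process (assumed)
    integer :: ierr, numprocs, proc_num, left, right, iter, i
    integer :: status(MPI_STATUS_SIZE)
    real(kind=8) :: u(0:nlocal+1), unew(0:nlocal+1)

    call mpi_init(ierr)
    call mpi_comm_size(MPI_COMM_WORLD, numprocs, ierr)
    call mpi_comm_rank(MPI_COMM_WORLD, proc_num, ierr)

    ! determine neighbors; MPI_PROC_NULL turns the sends/receives
    ! at the ends of the domain into no-ops
    left  = proc_num - 1
    right = proc_num + 1
    if (proc_num == 0)            left  = MPI_PROC_NULL
    if (proc_num == numprocs - 1) right = MPI_PROC_NULL

    u = 0.d0
    if (proc_num == numprocs - 1) u(nlocal+1) = 1.d0   ! boundary condition at right end

    do iter = 1, 1000
        ! exchange ghost values with neighbors
        ! (mpi_sendrecv pairs the send and receive, avoiding deadlock)
        call mpi_sendrecv(u(nlocal), 1, MPI_DOUBLE_PRECISION, right, 0, &
                          u(0),      1, MPI_DOUBLE_PRECISION, left,  0, &
                          MPI_COMM_WORLD, status, ierr)
        call mpi_sendrecv(u(1),        1, MPI_DOUBLE_PRECISION, left,  1, &
                          u(nlocal+1), 1, MPI_DOUBLE_PRECISION, right, 1, &
                          MPI_COMM_WORLD, status, ierr)

        ! Jacobi sweep on the interior points (Laplace equation)
        do i = 1, nlocal
            unew(i) = 0.5d0 * (u(i-1) + u(i+1))
        end do
        u(1:nlocal) = unew(1:nlocal)
    end do

    if (proc_num == 0) print *, "done; u(1) on process 0 =", u(1)
    call mpi_finalize(ierr)

end program jacobi_sketch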
See also the samples in the list below.
- Argonne tutorials
- Tutorial slides by Bill Gropp
- Livermore tutorials
- Open MPI
- The MPI Standard
- Some sample codes
- LAM MPI tutorials
- Google “MPI tutorial” to find more.