Lecture 4
Learning objectives
After this class, you should be able to:
- Explain the message passing programming model using MPI for communication.
- Given a problem, write parallel code to solve it, and evaluate the performance of your code, using the following MPI functions: (i) MPI_Init, (ii) MPI_Comm_rank, (iii) MPI_Comm_size, (iv) MPI_Reduce, (v) MPI_Finalize, (vi) MPI_Barrier, (vii) MPI_Wtime, (viii) MPI_Wtick, (ix) MPI_Send, (x) MPI_Recv, (xi) MPI_Isend, (xii) MPI_Irecv, and (xiii) MPI_Wait.
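For reference, here is a minimal sketch (not from the original notes) showing how several of these calls fit together in a complete program; it reports each process's rank, the number of processes, and the elapsed time measured around a barrier.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double t0, t1;

    MPI_Init(&argc, &argv);                  /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    MPI_Barrier(MPI_COMM_WORLD);             /* synchronize before timing */
    t0 = MPI_Wtime();
    /* ... computation to be timed would go here ... */
    t1 = MPI_Wtime();

    printf("Process %d of %d: elapsed = %g s (timer resolution %g s)\n",
           rank, size, t1 - t0, MPI_Wtick());

    MPI_Finalize();                          /* shut down MPI */
    return 0;
}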
Reading assignment
- MPI tutorial at: https://computing.llnl.gov/tutorials/mpi.
- A chapter on MPI: http://www.mcs.anl.gov/~itf/dbpp/text/node94.html.
- Reference -- MPI specifications: http://www.mcs.anl.gov/research/projects/mpi.
Exercises and review questions
- Exercises and review questions on current lecture's material
- Write a program to compute the dot product of two arrays in parallel, using MPI. The arrays should be distributed across all the processes (that is, each process should contain data only for a portion of each array). In the code below, c is the dot product of a and b. (One possible MPI solution is sketched after this list.)
c = 0;
for (i = 0; i < n; i++)
    c += a[i]*b[i];
- Write an MPI program in which each process with rank not equal to zero sends the following message to the process with rank zero: Message from process n, where n is the rank of the process sending the message. The process with rank zero should receive each message and output it. (See the second sketch after this list.)
- Write an MPI program that will run on two processes. Each process will exchange an array of 1000 integers with its neighbor using Isend/Irecv/Wait. Before calling Wait, please have each process sleep for 2 seconds. In a real program, you would not have the processes sleep. Instead, you would have them do some useful computation. (See the third sketch after this list.)
- Preparation for the next lecture
- None.
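Sketch for the first exercise (the parallel dot product). This is one possible approach, not necessarily the intended solution; the per-process array size N_LOCAL and the fill values are placeholders. Each process computes the dot product of its own portion of a and b, and MPI_Reduce sums the partial results onto process 0.
#include <stdio.h>
#include <mpi.h>

#define N_LOCAL 100   /* elements held by each process (placeholder value) */

int main(int argc, char *argv[])
{
    double a[N_LOCAL], b[N_LOCAL], local_c = 0.0, c = 0.0;
    int i, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < N_LOCAL; i++) {   /* fill this process's portion (placeholder data) */
        a[i] = 1.0;
        b[i] = 2.0;
    }

    for (i = 0; i < N_LOCAL; i++)     /* local part of the dot product */
        local_c += a[i] * b[i];

    /* sum the partial results onto process 0 */
    MPI_Reduce(&local_c, &c, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %g\n", c);

    MPI_Finalize();
    return 0;
}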
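Sketch for the second exercise (again, one possible approach rather than the only one). Every process with nonzero rank sends its message to process 0 with MPI_Send; process 0 receives one message from each other rank with MPI_Recv and prints it.
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char msg[64];
    int rank, size, src;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        sprintf(msg, "Message from process %d", rank);
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        for (src = 1; src < size; src++) {   /* receive and print one message per rank */
            MPI_Recv(msg, sizeof(msg), MPI_CHAR, src, 0, MPI_COMM_WORLD, &status);
            printf("%s\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}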
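Sketch for the third exercise, assuming the program is run with exactly two processes (for example, mpiexec -n 2). Each process posts MPI_Isend and MPI_Irecv for an array of 1000 integers, sleeps for 2 seconds in place of useful computation, and then completes both operations with MPI_Wait.
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

#define N 1000

int main(int argc, char *argv[])
{
    int sendbuf[N], recvbuf[N], i, rank, other;
    MPI_Request sreq, rreq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                     /* the other of the two processes */

    for (i = 0; i < N; i++)
        sendbuf[i] = rank;                /* something to exchange */

    MPI_Isend(sendbuf, N, MPI_INT, other, 0, MPI_COMM_WORLD, &sreq);
    MPI_Irecv(recvbuf, N, MPI_INT, other, 0, MPI_COMM_WORLD, &rreq);

    sleep(2);   /* stands in for useful computation overlapped with the communication */

    MPI_Wait(&sreq, MPI_STATUS_IGNORE);   /* complete the send */
    MPI_Wait(&rreq, MPI_STATUS_IGNORE);   /* complete the receive */

    printf("Process %d received an array filled with %d\n", rank, recvbuf[0]);

    MPI_Finalize();
    return 0;
}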
Last modified: 14 Jan 2010