<!--#set var="root_directory" value="../.." --><!--#include virtual="$root_directory/shared/header.shtml"-->

<h1>Elementary MPI</h1>

<!--TableOfContents:Begin-->
<!--TableOfContents:End-->

<h2 id="what">What is MPI?</h2>
<p>MPI consists of a library of routines with Fortran, C, and C++
bindings that facilitates the writing of parallel codes in a
distributed-memory MIMD environment. In addition, it provides a
run-time environment in which to build and execute the parallel
codes. Implementations exist for most popular platforms, including
the <em>beowulf</em>-type systems.</p>
<p>MPICH is a full, platform-independent implementation of MPI. The
MPICH implementation of MPI is what we will use in this tutorial.</p>
<p>MPI maintains internal data structures related to the
administration of the parallel tasks and to the communication between
the tasks and the environment. These internal objects are accessed
through references called <em>handles</em>.</p>
<p>The C MPI routines return an <code>int</code>, and the Fortran
routines return an integer <code>ierror</code> argument, which
contains the error status of the call. Error detection and debugging
of parallel codes are difficult in general, and MPI can only attempt
to help by reporting these error codes.</p>
<h2 id="Binding">Binding to Fortran and C</h2>

<p>All MPI routines are labeled <code>MPI_Name</code>, with the first
letter of the name capitalized.</p>
<p>The C call syntax is</p>

<pre>
int MPI_Name( &lt;argument list&gt; )
</pre>

<p>with an error code returned as an <code>int</code> upon execution.
The header file is <code>mpi.h</code>. The MPI-provided constants are
all capitalized, e.g., <code>MPI_COMM_WORLD</code>.</p>
<p>Fortran calls follow the syntax</p>

<pre>
call MPI_Name( &lt;argument list&gt;, ierror )
</pre>

<p>with the error code being an integer type. The header file is
<code>mpif.h</code>. The MPI-provided constants are all capitalized,
e.g., <code>MPI_COMM_WORLD</code>. However, note that Fortran is case
insensitive, and that consequently the use of capital letters is for
clarity only.</p>
<h2 id="init">MPI Initialization</h2>

<pre>
int MPI_Init( int *argc, char ***argv )
</pre>

<p>Connects the executables on the nodes to the MPI handling
daemons. It creates a communicator, <code>MPI_COMM_WORLD</code>, which
encompasses all the initialized processes on the various nodes and
gives each process a unique identification, its rank number. The
rank and size of the communicator can be obtained via the MPI query
functions.</p>
<h2 id="who">Who Am I?</h2>

<pre>
int MPI_Comm_size( MPI_Comm comm, int *size )
int MPI_Comm_rank( MPI_Comm comm, int *rank )
</pre>

<p>These return, respectively, the size of the specified communicator
and the rank (process number) of the calling process within that
communicator. Rank numbers run from 0 to <code>size-1</code>.</p>
<h2 id="final">Finalizing an MPI code</h2>

<pre>
int MPI_Finalize( void )
</pre>

<p>Must be the last MPI call in any MPI code; no other MPI routine can
be executed after it. Note that a new communicator cannot be invoked
after <code>MPI_Finalize()</code>.</p>
<h2 id="CL">Compiling &amp; Linking an MPI code</h2>

<pre>
mpicc &lt;name.c&gt; -o &lt;name&gt;
</pre>

<p>Compiles a C code <code>name.c</code>, produces the object file
<code>name.o</code>, and links it with the MPI library. The
<code>-o</code> option specifies the executable name, the default
being <code>a.out</code>. This command calls the default C compiler,
usually the GNU gcc compiler on Linux-based systems.</p>
<pre>
mpif77 &lt;name.f&gt; -o &lt;name&gt;
</pre>

<p>Compiles a Fortran code <code>name.f</code>, produces the object
file <code>name.o</code>, and links it with the MPI library. The
<code>-o</code> option specifies the executable name, the default
being <code>a.out</code>. This command calls the default Fortran
compiler, usually the GNU f77 compiler on Linux-based systems.</p>
<h2 id="running">Running an MPI code</h2>

<p>MPI2 provides the <code>mpd</code> subset of commands to build a
virtual parallel machine, a communication ring. These were described
in the previous section on <a href="../MPI2/#ring">Communication
Ring</a> under MPI2.</p>
<p>Once a communication ring has been established, the command
<code>mpiexec</code> launches an executable on a specified number of
processes:</p>

<pre>
mpiexec -np &lt;number-of-processes&gt; &lt;executable&gt;
</pre>

<p>will start the executable code <code>executable</code> on
<code>&lt;number-of-processes&gt;</code> processes.</p>
<p>MPI2 will choose the nodes automatically, starting from the current
(launching) node and selecting from the list of available processors
in the communication ring. If
<code>&lt;number-of-processes&gt;</code> is larger than the number of
nodes included in the communication ring, the executable will be
launched more than once on some of the nodes, chosen in a round-robin
fashion.</p>
<h2 id="examples">Examples</h2>

<p>Check out the <a href="../../src/hello_MPI/">hello_MPI</a> package
for some simple MPI examples.</p>

<!--#include virtual="$root_directory/shared/footer.shtml"-->