From cd77aff2fb99ec1345e5f708fd6fb3b404582963 Mon Sep 17 00:00:00 2001
From: "W. Trevor King"
Date: Tue, 14 Sep 2010 21:36:03 -0400
Subject: [PATCH] -> and similar in Elementary_MPI/index.shtml.

---
 content/Elementary_MPI/index.shtml | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/content/Elementary_MPI/index.shtml b/content/Elementary_MPI/index.shtml
index 025a61e..6dc4f26 100644
--- a/content/Elementary_MPI/index.shtml
+++ b/content/Elementary_MPI/index.shtml
@@ -12,7 +12,7 @@ bindings to facilitate the coding of parallel codes in a distributed
 memory MIMD environment. In addition it provides a run-time
 environment to build and execute the parallel codes. It comes with
 implementations on most popular platforms, including
-the beowulf type systems.
+the beowulf type systems.
 
 MPICH is a full implementation of MPI in a platform independent
 way. The MPICH implementation of MPI is what we will use in this
@@ -21,7 +21,7 @@ course.
 MPI maintains internal data-structures related to the
 administrations of the parallel tasks and to allow internal
 communications between the tasks and the environment. The latter are
-referred to as handles.
+referred to as handles.
 
 The C MPI routines return an int and the
 Fortran routines return an integer ierror in the calls which
@@ -74,8 +74,8 @@ int MPI_Comm_rank( MPI_Comm comm, int *rank)
 
 Returns respectively the size of the specified communicator and the
 rank (process number) of the current process within the specified
-communicator. The rank number goes between 0
-and size-1.
+communicator. The rank number goes between 0
+and size-1.
 
 Finalizing an MPI code
 
--
2.26.2
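
For context, the patched page walks through MPI_Comm_size and MPI_Comm_rank and has a section on finalizing an MPI code. A minimal C sketch of how those calls fit together follows; the file name hello_mpi.c and the build/run commands are illustrative only and are not part of this patch or of the page.

/* hello_mpi.c: hypothetical example, not from the patched page.
 * Each process reports its rank (0 .. size-1) within MPI_COMM_WORLD. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int size, rank;

    MPI_Init(&argc, &argv);                /* start the MPI run-time environment */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* size of the communicator */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process: 0 to size-1 */
    printf("process %d of %d\n", rank, size);
    MPI_Finalize();                        /* finalize before exiting */
    return 0;
}

With an MPICH-style toolchain this would typically be built with mpicc hello_mpi.c -o hello_mpi and launched with mpiexec -n 4 ./hello_mpi, after which each of the four processes prints its own rank.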