From: W. Trevor King Date: Tue, 14 Sep 2010 15:03:55 +0000 (-0400) Subject: Convert tree_broadcast.pdf assignment to XHTML. X-Git-Url: http://git.tremily.us/?a=commitdiff_plain;h=9b4d7bd790f15135a470a22ee2c6b6a93c2a3f8f;p=parallel_computing.git Convert tree_broadcast.pdf assignment to XHTML. --- diff --git a/assignments/archive/tree_broadcast.pdf/assigment.pdf b/assignments/archive/tree_broadcast.pdf/assigment.pdf deleted file mode 100644 index 5cdd758..0000000 Binary files a/assignments/archive/tree_broadcast.pdf/assigment.pdf and /dev/null differ diff --git a/assignments/archive/tree_broadcast/index.shtml b/assignments/archive/tree_broadcast/index.shtml index 81b8e1b..ea57181 100644 --- a/assignments/archive/tree_broadcast/index.shtml +++ b/assignments/archive/tree_broadcast/index.shtml @@ -1,83 +1,61 @@ -

Assignment #3

-

Due Friday, October 26, 2007

+

Assignment #4

+

Due Friday, October 23, 2009

Purpose

-

To learn how to implement truly parallel algorithms.

+

Learn how to implement a truly parallel algorithm.

-

Setup

+

Note: Please identify all your work.

This assignment consists in building your own broadcast code based -on a binary tree. The code print_tree.c +on a binary tree. The code +print_tree.c provides a simple implementation of a binary tree based on node zero as being the root. It generates the node numbers from which any node is to receive the information and to which to send this information.

-

To do

+

Part A

-

Part A

+

Write a function My_Bcast() that performs a broadcast +similar to the one provided by MPICH2. It should follow the same +syntax as MPI_Bcast(), namely

-
    -
  1. Build a broadcast function based on the skeleton code - print_tree.c.
  2. -
  3. Call this function my_broadcast() with a calling - sequence -
    int my_broadcast( int my_rank, int size, int root, double *msg, int count )
    - In other words, restrict it to only double data type for - simplicity and to root pointing to node 0 only (via - an if ( )).
  4. -
  5. Move the inner parts of the loop - in print_tree.c over to a function - my_broadcast (still in a serial code).
  6. -
  7. Use an integer array target[] created by - malloc() to store the list of to target - nodes.
  8. -
  9. Include the MPI administrative calls - (MPI_Init(), - MPI_Comm_size(), - MPI_Comm_rank(), - MPI_Finalize()), and use the actual size and - rank of the virtual machine instead for the loop - over rank.
  10. -
  11. Implement the send and receive routines (MPI_Recv() and - MPI_Ssend()) to receive and transmit the message.
  12. -
  13. Put printf() statements in the main program to - check that the code is working correctly.
  14. -
+
+int My_Bcast(void *buffer, int count, MPI_Datatype datatype, int root,
+             MPI_Comm comm)
+
-

Part B

+

This function should be based on the skeleton +code print_tree.c. It should therefore assume that root +is process zero (0). If not, it should print an error message +stating this fact and return an exit code of one (1).

-

Use MPI_Wtime() to time three ways of "broadcasting" a -message:

-
    -
  1. Using a for loop over the nodes to send the message - from the root node to the other nodes.
  2. -
  3. Using my_broadcast().
  4. -
  5. Using MPI_Bcast().
  6. -
+

Test your routine carefully for various numbers of processes and +different count and datatype values in the call.

-

To do so, send a message of some length (at least 1000, 5000 and -10000 double — filled with random numbers) over and over again -100 times (using a for loop) to have good time -statistics. Print the average of the times for each broadcast -methods.

+

Part B

-

Part C

+

Instrument your code for timing measurements. Modify your main code +to broadcast the information three (3) different ways:

-

Modify your function my_broadcast() to broadcast from -an arbitrary root node, namely

    -
  1. Modify the algorithm (explain what you will implement).
  2. -
  3. Modify print_tree.c accordingly.
  4. -
  5. Modify my_broadcast() accordingly.
  6. -
  7. Fully check that my_broadcast() works properly by - inserting appropriate printf() in the main program and - running the code for different number of processes and - root.
  8. -
  9. Repeat the timing study in part B for this new function.
  10. +
  11. sequentially via a for loop,
  12. +
  13. via your My_Bcast(), and
  14. +
  15. via the MPICH2 MPI_Bcast().
+

Broadcast 10,000 integers 100 times and take an average over the +100 broadcasts to smooth out the time fluctuations. Repeat for 10,000 +doubles. Tabulate your timings and comment on the results.

+ +

Part C

+ +

Modify My_Bcast() to be able to broadcast from any +process (the root process) within the virtual machine. Check your code +carefully for different root processes and various numbers of +processes.

+ diff --git a/assignments/archive/tree_broadcast/index.shtml.2007.10.shtml b/assignments/archive/tree_broadcast/index.shtml.2007.10.shtml new file mode 100644 index 0000000..81b8e1b --- /dev/null +++ b/assignments/archive/tree_broadcast/index.shtml.2007.10.shtml @@ -0,0 +1,83 @@ + + +

Assignment #3

+

Due Friday, October 26, 2007

+ +

Purpose

+ +

To learn how to implement truly parallel algorithms.

+ +

Setup

+ +

This assignment consists in building your own broadcast code based +on a binary tree. The code print_tree.c +provides a simple implementation of a binary tree based on node zero +as being the root. It generates the node numbers from which any node +is to receive the information and to which to send this +information.

+ +

To do

+ +

Part A

+ +
    +
  1. Build a broadcast function based on the skeleton code + print_tree.c.
  2. +
  3. Call this function my_broadcast() with a calling + sequence +
    int my_broadcast( int my_rank, int size, int root, double *msg, int count )
    + In other words, restrict it to only double data type for + simplicity and to root pointing to node 0 only (via + an if ( )).
  4. +
  5. Move the inner parts of the loop + in print_tree.c over to a function + my_broadcast (still in a serial code).
  6. +
  7. Use an integer array target[] created by + malloc() to store the list of to target + nodes.
  8. +
  9. Include the MPI administrative calls + (MPI_Init(), + MPI_Comm_size(), + MPI_Comm_rank(), + MPI_Finalize()), and use the actual size and + rank of the virtual machine instead for the loop + over rank.
  10. +
  11. Implement the send and receive routines (MPI_Recv() and + MPI_Ssend()) to receive and transmit the message.
  12. +
  13. Put printf() statements in the main program to + check that the code is working correctly.
  14. +
+ +

Part B

+ +

Use MPI_Wtime() to time three ways of "broadcasting" a +message:

+
    +
  1. Using a for loop over the nodes to send the message + from the root node to the other nodes.
  2. +
  3. Using my_broadcast().
  4. +
  5. Using MPI_Bcast().
  6. +
+ +

To do so, send a message of some length (at least 1000, 5000 and +10000 double — filled with random numbers) over and over again +100 times (using a for loop) to have good time +statistics. Print the average of the times for each broadcast +methods.

+ +

Part C

+ +

Modify your function my_broadcast() to broadcast from +an arbitrary root node, namely

+
    +
  1. Modify the algorithm (explain what you will implement).
  2. +
  3. Modify print_tree.c accordingly.
  4. +
  5. Modify my_broadcast() accordingly.
  6. +
  7. Fully check that my_broadcast() works properly by + inserting appropriate printf() in the main program and + running the code for different number of processes and + root.
  8. +
  9. Repeat the timing study in part B for this new function.
  10. +
+ + diff --git a/assignments/current/4 b/assignments/current/4 index 80de2c0..840a651 120000 --- a/assignments/current/4 +++ b/assignments/current/4 @@ -1 +1 @@ -../archive/tree_broadcast.pdf/ \ No newline at end of file +../archive/tree_broadcast/ \ No newline at end of file diff --git a/assignments/index.shtml b/assignments/index.shtml index eb057db..bb3ab1a 100644 --- a/assignments/index.shtml +++ b/assignments/index.shtml @@ -6,7 +6,7 @@
  • Assignment 1
  • Assignment 2
  • Assignment 3
  • -
  • Assignment 4
  • +
  • Assignment 4
  • Assignment 5
  • Assignment 6
  • Assignment 7