+++ /dev/null
-<!--#set var="root_directory" value="../.." --><!--#include virtual="$root_directory/shared/header.shtml"-->
-
-<h1>Monte Carlo Techniques</h1>
-
-<!--TableOfContents:Begin-->
-<!--TableOfContents:End-->
-
-<p>The Monte Carlo technique for evaluating integrals applies best to
-higher-dimensional integrals for which traditional methods become
-impractical. For instance, a 1-D integral is easily performed with a
-few hundred points using the trapezoidal or Simpson method. The same
-does not hold in many dimensions: a five-dimensional integral</p>
-
-<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
- <!--annotation encoding="LaTeX">
- I = \int dx_1 \int dx_2 \int dx_3 \int dx_4 \int dx_5 f(x_1,x_2,x_3,x_4,x_5)
- </annotation-->
- <mrow><mi>I</mi><mo>=</mo><mo stretchy="false">∫</mo><mi>d</mi><msub><mi>x</mi><mn>1</mn></msub><mo stretchy="false">∫</mo><mi>d</mi><msub><mi>x</mi><mn>2</mn></msub><mo stretchy="false">∫</mo><mi>d</mi><msub><mi>x</mi><mn>3</mn></msub><mo stretchy="false">∫</mo><mi>d</mi><msub><mi>x</mi><mn>4</mn></msub><mo stretchy="false">∫</mo><mi>d</mi><msub><mi>x</mi><mn>5</mn></msub><mi>f</mi><mo stretchy="false">(</mo><msub><mi>x</mi><mn>1</mn></msub><mo>,</mo><msub><mi>x</mi><mn>2</mn></msub><mo>,</mo><msub><mi>x</mi><mn>3</mn></msub><mo>,</mo><msub><mi>x</mi><mn>4</mn></msub><mo>,</mo><msub><mi>x</mi><mn>5</mn></msub><mo stretchy="false">)</mo></mrow>
-</math>
-
-<p>would require 200<sup>5</sup> function evaluations to implement a
-method with 200 points per axis. Statistical physics requires
-integrals of much higher dimension, often of the order of the number
-of particles.</p>
-
-<p>The Monte Carlo approach deals with this problem by sampling the
-integrand via statistical means. The method calls for a random walk,
-or a guided random walk, in phase space during which the integrand is
-evaluated at each step and averaged over.</p>
-
-<p>This problem is presented here as a challenging case to implement
-via a parallel algorithm. Different implementations will require us
-to use all the MPI functions we have seen so far. In the following,
-the Monte Carlo method will first be explained, then it will be
-implemented serially, and then the Metropolis algorithm will be
-explained, and implemented serially and in parallel.</p>
-
-<h2 id="MCI">Monte Carlo Integral</h2>
-
-<p>The Monte Carlo Method is best explained via a 1-D integral</p>
-
-<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
- <!--annotation encoding="LaTeX">
- I = \int_a^b f(x) dx
- </annotation-->
- <mrow><mi>I</mi><mo>=</mo><msubsup><mo stretchy="false">∫</mo><mi>a</mi><mi>b</mi></msubsup><mi>f</mi><mo stretchy="false">(</mo><mi>x</mi><mo stretchy="false">)</mo><mi>d</mi><mi>x</mi></mrow>
-</math>
-
-<p>Rewrite it as</p>
-
-<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
- <!--annotation encoding="LaTeX">
- I = (b-a) \langle f(x) \rangle
- </annotation-->
- <mrow><mi>I</mi><mo>=</mo><mo stretchy="false">(</mo><mi>b</mi><mo>-</mo><mi>a</mi><mo stretchy="false">)</mo><mo stretchy="false">〈</mo><mi>f</mi><mo stretchy="false">(</mo><mi>x</mi><mo stretchy="false">)</mo><mo stretchy="false">〉</mo></mrow>
-</math>
-
-<p>where <i>〈f(x)〉</i> is the average of <i>f(x)</i> over the
-interval <i>(a,b)</i>.</p>
-
-<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
- <!--annotation encoding="LaTeX">
- \langle f(x) \rangle = \frac{1}{N} \sum_{i=1}^{N} f(x_i)
- </annotation-->
- <mrow><mo stretchy="false">〈</mo><mi>f</mi><mo stretchy="false">(</mo><mi>x</mi><mo stretchy="false">)</mo><mo stretchy="false">〉</mo><mo>=</mo><mfrac><mn>1</mn><mi>N</mi></mfrac><msubsup><mo stretchy="false">∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mi>N</mi></msubsup><mi>f</mi><mo stretchy="false">(</mo><msub><mi>x</mi><mi>i</mi></msub><mo stretchy="false">)</mo></mrow>
-</math>
-
-<p>The Monte Carlo method consists of choosing the sampling points
-<i>x<sub>i</sub></i> at random in the interval <i>(a,b)</i>.</p>
-
-<img border="0" src="img/integral_def.png" width="301" height="126" />
-
-<p>The advantage over other methods in higher dimensions is that
-better coverage of the function can be achieved with fewer points in
-phase space.</p>
-
-<p>The error in evaluating <i>〈f(x)〉</i> is described by the
-variance</p>
-
-<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
- <!--annotation encoding="LaTeX">
- \sigma^2 = \frac{1}{N} \left[ \langle f^2 \rangle - \langle f \rangle^2 \right]
- </annotation-->
- <mrow><msup><mi>σ</mi><mn>2</mn></msup><mo>=</mo><mfrac><mn>1</mn><mi>N</mi></mfrac><mrow><mo stretchy="true">[</mo><mrow><mo stretchy="false">〈</mo><msup><mi>f</mi><mn>2</mn></msup><mo stretchy="false">〉</mo><mo>-</mo><mo stretchy="false">〈</mo><mi>f</mi><msup><mo stretchy="false">〉</mo><mn>2</mn></msup></mrow><mo stretchy="true">]</mo></mrow></mrow>
-</math>
-
-<h2 id="pi">Calculation of π</h2>
-
-<p>The method is illustrated by integrating <i>f(x,y)=1</i> in a
-circle, namely</p>
-
-<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
- <!--annotation encoding="LaTeX">
- I = \int_{-1}^{1} dx \int_{-1}^{1} dy \Theta( 1 - x^2 -y^2 )
- </annotation-->
- <mrow><mi>I</mi><mo>=</mo><msubsup><mo stretchy="false">∫</mo><mrow><mo>-</mo><mn>1</mn></mrow><mn>1</mn></msubsup><mi>d</mi><mi>x</mi><msubsup><mo stretchy="false">∫</mo><mrow><mo>-</mo><mn>1</mn></mrow><mn>1</mn></msubsup><mi>d</mi><mi>y</mi><mi mathvariant="normal">Θ</mi><mo stretchy="false">(</mo><mn>1</mn><mo>-</mo><msup><mi>x</mi><mn>2</mn></msup><mo>-</mo><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">)</mo></mrow>
-</math>
-
-<p>This corresponds to</p>
-
-<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
- <!--annotation encoding="LaTeX">
- I = \int_{-1}^{1} dx \int_{-1}^{1} dy
- \begin{cases}
- 1 & \text{if $x^2+y^2 \leq 1$} \\
- 0 & \text{if $x^2+y^2 > 1$}
- \end{cases}
- </annotation-->
- <mrow><mi>I</mi><mo>=</mo><msubsup><mo stretchy="false">∫</mo><mrow><mo>-</mo><mn>1</mn></mrow><mn>1</mn></msubsup><mi>d</mi><mi>x</mi><msubsup><mo stretchy="false">∫</mo><mrow><mo>-</mo><mn>1</mn></mrow><mn>1</mn></msubsup><mi>d</mi><mi>y</mi><mrow><mo stretchy="true">{</mo><mstyle scriptlevel="0"><mtable><mtr><mtd><mn>1</mn></mtd><mtd><mtext>if </mtext><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup><mo>≤</mo><mn>1</mn></mtd></mtr><mtr><mtd><mn>0</mn></mtd><mtd><mtext>if </mtext><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup><mo>></mo><mn>1</mn></mtd></mtr></mtable></mstyle></mrow></mrow>
-</math>
-
-<p>The Monte Carlo approach is implemented by enclosing the circle
-in a square of side 2 and using random numbers in the intervals
-<b>x = (-1,1)</b> and <b>y = (-1,1)</b>. The area of the circle
-is π · 1<sup>2</sup> and the area of the square is
-2<sup>2</sup>. The ratio of these areas is then π/4.</p>
-
-<p>Random numbers, taken as x-y coordinates of points in a
-plane, fill the plane uniformly. The numbers of random points
-that fall within the circle and within the square are
-proportional to their areas. Therefore</p>
-
-<p><b>π = 4 × ( # hits in circle ) /
-( # random numbers )</b></p>
-
-<p>The program <a href="codes/random_demo.c">random_demo.c</a>
-generates x and y coordinates in the interval (0,1) via calls to
-rand( ), the built-in pseudo-random-number generator in the
-C standard library. The code is serial and can run on any
-computer.</p>
-
-<p><i>random_demo 2000 > data_file</i></p>
-
-<p>runs random_demo to produce 2000 random points, which are
-redirected into the file <em>data_file</em>. The latter can be
-plotted using <em>gnuplot</em>. It produces the following graph.</p>
-
-<p><img border="0" src="Monte_Images/1-series.jpg" width="215"
-height="230" /> <img border="0" src=
-"Monte_Images/2001_04_22_151701_shot.jpg" width="257" height=
-"265" /><br />
-The program <i><a href=
-"codes/monte_pi_serial.c">monte_pi_serial.c</a></i> calculates
-π by the Monte-Carlo method. It uses random numbers in the
-interval [0,1].</p>
-
-<p>The error in calculating π follows from the standard
-deviation calculation in the following way:</p>
-
-<p><!-- MATH: $\sigma^2 = \frac{1}{N} [ \langle f^2 \rangle - \langle f \rangle^2 ]$ -->
-<img width="187" height="34" align="middle" border="0" src=
-"Monte_Images/img20.gif" alt=
-"$ \sigma^2 = \frac{1}{N} [ \langle f^2 \rangle - \langle f \rangle^2 ] $" /><br />
-
-<img width="246" height="44" align="middle" border="0" src=
-"Monte_Images/img21.gif" alt=
-"$ \sigma^2 = \frac{1}{N} \left[ \frac{1}{N} \sum_{i=1}^{N} 1 - ( \frac{1}{N} \sum_{i=1}^{N} 1 )^2 \right] $" />
-<br />
- <img width="101" height="39" align="middle" border="0"
-src="Monte_Images/img22.gif" alt=
-"$ \pi = \frac{1}{N} \sum_{i=1}^{N} 1 $" /><br />
- <img width="108" height="33" align="middle" border="0"
-src="Monte_Images/img23.gif" alt=
-"$ \sigma^2 = \frac{\pi}{N} \left( 1 - \pi \right) $" /></p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Applicability" id="Applicability">Applicability of
-the Method</a></h2>
-
-<p>The random sampling of a function clearly works best if the
-function is a constant. (Why don't you then get the exact
-value of π in the example above?) If the function varies
-rapidly, or is locally peaked, the Monte Carlo method will not
-converge as rapidly. What is required is to "guide" the random
-sampling so that it emphasizes regions where the function is
-large or varies rapidly. Let's illustrate the concept in
-1-D.</p>
-
-<p><!-- MATH: $I = \int_0^1 f(x) dx$ -->
-<img width="102" height="39" align="middle" border="0" src=
-"Monte_Images/img7.gif" alt="$ I = \int_0^1 f(x) dx $" /><br />
-Introduce a weight function <b>w(x)</b></p>
-
-<p><!-- MATH: $I = \int_0^1 w(x) \frac{f(x)}{w(x)} dx$ -->
-<img width="134" height="40" align="middle" border="0" src=
-"Monte_Images/img8.gif" alt=
-"$ I = \int_0^1 w(x) \frac{f(x)}{w(x)} dx $" /><br />
-where <b>w(x)</b> is required to satisfy</p>
-
-<p><!-- MATH: $\int_0^1 w(x) dx = 1$ -->
-<img width="104" height="39" align="middle" border="0" src=
-"Monte_Images/img9.gif" alt="$ \int_0^1 w(x) dx = 1 $" /><br />
-Change variables to <b>y</b></p>
-
-<p><!-- MATH: $y(x) = \int_0^{x} w(x') dx'$ -->
-<img width="140" height="44" align="middle" border="0" src=
-"Monte_Images/img10.gif" alt=
-"$ y(x) = \int_0^{x} w(x') dx' $" /><br />
-such that</p>
-
-<p><!-- MATH: $dy = w(x) dx$ -->
-<b><i>dy</i> = <i>w</i>(<i>x</i>) <i>dx</i></b>, <b><i>y</i>(0) =
-0</b>, and <b><i>y</i>(1) = 1.</b></p>
-
-<p>
-<!-- MATH: $I = \int_0^1 dy \frac{ f( x(y) ) } { w( x(y) ) }$ -->
-<img width="115" height="40" align="middle" border="0" src=
-"Monte_Images/img11.gif" alt=
-"$ I = \int_0^1 dy \frac{ f( x(y) ) } { w( x(y) ) } $" /><br />
-The weight function <b>w(x)</b> is chosen such that <b>[f/w]</b>
-is approximately constant. The Monte Carlo sampling of
-<b>[f/w]</b> with uniform random sampling in the <b>y</b> variable
-will then be accurate.</p>
-
-<p>
-<!-- MATH: $I = \frac{1}{N} \sum_{i=1}^{N} \frac{ f( x(y_i) ) } { w( x(y_i) ) }$ -->
-<img width="141" height="40" align="middle" border="0" src=
-"Monte_Images/img15.gif" alt=
-"$ I = \frac{1}{N} \sum_{i=1}^{N} \frac{ f( x(y_i) ) } { w( x(y_i) ) } $" /><br />
-
-Koonin and Meredith (Computational Physics, 1989) give the
-example</p>
-
-<p><!-- MATH: $I = \int_0^1 dx \frac{1}{1+x^2}$ -->
-<img width="104" height="39" align="middle" border="0" src=
-"Monte_Images/img12.gif" alt=
-"$ I = \int_0^1 dx \frac{1}{1+x^2} $" /><br />
-with</p>
-
-<p><!-- MATH: $w(x) = \frac{1}{3} ( 4 - 2 x )$ -->
-<img width="125" height="34" align="middle" border="0" src=
-"Monte_Images/img13.gif" alt=
-"$ w(x) = \frac{1}{3} ( 4 - 2 x ) $" /><br />
-In this simple case, the inversion x(y) is possible
-analytically.</p>
-
-<p><!-- MATH: $y = \frac{1}{3} x ( 4 - x )$ -->
-<img width="101" height="34" align="middle" border="0" src=
-"Monte_Images/img14.gif" alt=
-"$ y = \frac{1}{3} x ( 4 - x ) $" /><br />
- <b><i>x</i> = 2 - √(4 - 3<i>y</i>)</b></p>
-
-<p>In general, this inversion x(y) is difficult or impossible to
-perform analytically. An alternate approach must be used.</p>
-
-<p>The relation <b><i>dy</i> = <i>w</i>(<i>x</i>)
-<i>dx</i></b> implies another interpretation of the
-method. Random numbers uniformly distributed in <b>y</b>
-will be distributed according to <b>w(x)</b> in <b>x</b>.
-Therefore, the guiding action of <b>w(x)</b> can be achieved by
-generating random numbers according to the probability profile
-<b>w(x)</b>. This holds in phase spaces of arbitrary dimension.
-The problem translates into that of generating random numbers
-distributed according to an arbitrary probability distribution.</p>
-
-<p>Uniform random numbers in the interval (0,1) have an average
-value of 0.5. Binning them in intervals <b>dx</b> will show an
-average number of random numbers per bin of</p>
-
-<p>number / bin = total # of random numbers / number of
-bins.</p>
-
-<p>The number of random numbers in the bins will fluctuate around
-the average with a Gaussian distribution.</p>
-
-<p>Random numbers distributed according to the distribution
-<b>w(x)</b> will bin according to <b>w(x)</b>.</p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Metropolis Algorithm">Metropolis Algorithm</a></h2>
-
-<p>The Metropolis algorithm generates a random walk guided by the
-weight function <b>w(x)</b>. The algorithm in 2-D involves
-the following steps, starting from a point
-<b>(x<sub>i</sub>, y<sub>i</sub>)</b>:</p>
-
-<ol>
- <li>choose <b>η</b> (step size)</li>
-
- <li>generate 2 uniformly distributed random numbers,
- <b>R<sub>1</sub></b> and <b>R<sub>2</sub></b>, in the interval
- (-1,1)</li>
-
- <li>calculate a new trial point according to<br />
- <!-- MATH: $x_{i+1}^T = x_i + \delta x_R$ -->
- <img width="116" height="34" align="middle" border="0" src=
- "Monte_Images/img16.gif" alt=
- "$ x_{i+1}^T = x_i + \delta x_R $" /><br />
- <!-- MATH: $y_{i+1}^T = y_i + \delta y_R$ -->
- <img width="112" height="34" align="middle" border="0" src=
- "Monte_Images/img17.gif" alt=
- "$ y_{i+1}^T = y_i + \delta y_R $" /><br />
- where the steps in <b>x</b> and <b>y</b> are <b>η</b>
- <b>R<sub>1</sub></b> and <b>η</b> <b>R<sub>2</sub></b>
- respectively</li>
-
- <li>evaluate the ratio of the weight function at the
- trial point to the one at the old point<br />
- <!-- MATH: $r = \frac{ w( x_{i+1}^T, y_{i+1}^T ) } { w( x_{i}, y_{i} ) }$ -->
- <img width="110" height="47" align="middle" border="0" src=
- "Monte_Images/img18.gif" alt=
- "$ r = \frac{ w( x_{i+1}^T, y_{i+1}^T ) } { w( x_{i}, y_{i} ) } $" /></li>
-
- <li>if <b>r &gt; 1</b>, accept the new point
- <b>(x<sub>i+1</sub>,y<sub>i+1</sub>)</b> =
- <b>(x<sub>i+1</sub><sup>T</sup>,y<sub>i+1</sub><sup>T</sup>)</b>
- and loop back to step (2)</li>
-
- <li>if <b>r &lt; 1</b>, accept the new point with probability
- <b>r</b>. That is, generate <b>R</b>, a random number in
- the interval <b>(0,1)</b>, and accept
- <b>(x<sub>i+1</sub><sup>T</sup>,y<sub>i+1</sub><sup>T</sup>)</b>
- as the new point <b>(x<sub>i+1</sub>,y<sub>i+1</sub>)</b> if
- <b>r &gt; R</b>, and loop back to step (2)</li>
-
- <li>otherwise, reject it, and keep the previous point as the
- new one <b>(x<sub>i+1</sub>,y<sub>i+1</sub>)</b> =
- <b>(x<sub>i</sub>,y<sub>i</sub>)</b></li>
-
- <li>loop back to step (2)</li>
-</ol>
-
-<p>Koonin and Meredith (Computational Physics, 1989) provide a
-simple proof that the algorithm produces the desired random
-number distribution <b>w(x)</b>.</p>
-
-<p>The step size <b>η</b> in the Metropolis algorithm is
-arbitrary. If chosen small, the acceptance ratio will be
-high (a point in the neighborhood of an acceptable point will
-also likely be acceptable), but overall coverage of the
-phase space will be very slow. If <b>η</b> is large,
-the phase space will be covered efficiently, but a new point will
-seldom be accepted by the Metropolis criterion. In either
-case inefficiency follows. An empirical rule is to choose
-<b>η</b> so as to have an acceptance ratio of about 0.5 -
-0.6.</p>
-
-<p>A concern about the algorithm is that the random numbers
-generated by the Metropolis algorithm are correlated. A
-way to guard against this (Koonin and Meredith, Computational
-Physics, 1989) is to calculate the correlation
-function</p>
-
-<p>
-<!-- MATH: $c(k) = \frac{ \langle f_i f_{i+k} \rangle - \langle f_i \rangle^2 } { \langle f_i^2 \rangle - \langle f_i \rangle^2 }$ -->
-<img width="160" height="43" align="middle" border="0" src=
-"Monte_Images/img19.gif" alt=
-"$ c(k) = \frac{ \langle f_i f_{i+k} \rangle - \langle f_i \rangle^2 } { \langle f_i^2 \rangle - \langle f_i \rangle^2 } $" /><br />
-
-and select every <b>k</b><sup><b>th</b></sup> random number
-generated by the algorithm, with <b>k</b> selected to produce a
-small value of <b>c(k)</b>.</p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Guided" id="Guided">Guided Monte Carlo
-Integral</a></h2>
-
-<p>Let's consider the integral<br />
- <img width="224" height="39" align="middle" border="0"
-src="Monte_Images/img24.gif" alt=
-"$ I = \int dx \int dy (x^2+y^2) e^{ - (x^2+y^2) } $" /><br />
-The integrand is peaked (torus shape) around the
-origin.<br />
-<img border="0" src="Monte_Images/Integrated_Function.gif" width=
-"450" height="150" /><br />
-The Gaussian function</p>
-
-<p><!-- MATH: $w(x,y) = \frac{1}{\pi} e^{ - (x^2+y^2) }$ -->
-<img width="149" height="39" align="middle" border="0" src=
-"Monte_Images/img25.gif" alt=
-"$ w(x,y) = \frac{1}{\pi} e^{ - (x^2+y^2) } $" /><br />
-can be chosen to guide the random number selection.</p>
-
-<p>The program <a href=
-"codes/Metropolis_Gaussian_Demo.c">Metropolis_Gaussian_Demo.c</a>
-illustrates random-number generation according to
-<b>w(x,y)</b>. Note that arbitrary values for the step size
-(<b>η</b>=0.2) and <b>k</b> (k=1) were chosen; no effort was
-made to choose them optimally.</p>
-
-<p>This code, with 2000 points, produced the following random
-number distribution.</p>
-
-<p><img border="0" src="Monte_Images/2001_04_13_161850_shot.jpg"
-width="377" height="375" /><br />
-The program <a href=
-"codes/Metropolis_Gaussian_Integral.c">Metropolis_Gaussian_Integral.c</a>
- performs the integral.</p>
-
-<p><i>Metropolis_Gaussian_Integral -p > data_file</i><br />
-<i> </i> <i>or</i><br />
-<i>Metropolis_Gaussian_Integral</i></p>
-
-<p>This code prompts the user for more points to be added to the
-Monte Carlo integral. The calculation of the error may show the
-calculation not to be accurate given that the choices of
-<b>η</b> and <b>k</b> were not optimized.</p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Parallel_MC" id="Parallel_MC">Parallel Monte Carlo
-Algorithm</a></h2>
-
-<h2><a name="Random_Numbers_Generation" id=
-"Random_Numbers_Generation">Random Numbers Generation</a></h2>
-
-<p>Pseudo-random numbers are often generated by a linear
-congruential algorithm. The integer random number
-<b>x<sub>i+1</sub></b> is generated from the previous one
-by<br />
-<b><i>x</i><sub><i>i</i>+1</sub> = (
-<i>x</i><sub><i>i</i></sub> <i>a</i> + <i>b</i> ) <i>mod
-c</i></b> <br />
-The coefficients <b>a</b>, <b>b</b> and <b>c</b> are chosen to
-generate statistically good random numbers in a specific integer
-range (Numerical Recipes). Often the random numbers are
-generated using different linear congruential relations and mixed
-via random selection, as in ran1( ) and ran2( ) (Numerical
-Recipes).</p>
-
-<p>The procedure to generate random numbers is
-deterministic. Two different nodes on a parallel machine
-will produce exactly the same sequence if "seeded" in the same
-way. The question then arises as to how to generate "good" random
-numbers on the different nodes of the parallel computer that will
-remain meaningful when used as a single set.</p>
-
-<p>One solution is to generate sequences on each node based on
-different seeds. The code <a href=
-"codes/random_demo_2_series.c">random_demo_2_series.c</a>
-simulates this approach by generating 1 or 2 sequences of random
-numbers that can be analyzed for their quality. The idea
-is to compare the quality of a sequence in which the first and
-second halves are generated with different seeds against a
-sequence generated from a single seed.</p>
-
-<p><i>random_demo_2_series 2 1000 > data_file</i></p>
-
-<p>The following filters can be used to get a feel for
-the quality of these random numbers</p>
-
-<ul>
- <li><i><a href="codes/void.c">void.c</a></i> which locates
- voids in the plane covered by 1 or 2 sequences of random
- numbers. This can be used to illustrate differences in coverage
- of the plane in short sequences. Example:<br />
- <i>random_demo_2_series 2 1000 | void > data_file</i><br />
- produces the following graphs in <em>gnuplot</em> for 1 and 2
- seeded series of 1000 points<br />
- <img border="0" src="Monte_Images/2001_04_22_152359_shot.jpg"
- width="214" height="210" /> <img border="0"
- src="Monte_Images/2001_04_22_152333_shot.jpg" width="189"
- height="212" /><br /></li>
-
- <li><i><a href=
- "codes/bin_distribution.c">bin_distribution.c</a></i> which
- bins the random numbers in a 2-D grid, then further bins the
- distribution as a function of the number of random numbers per
- bin.<br />
- <i>random_demo_2_series 2 20000 | bin_distribution >
- data_file<br /></i> produces the following distributions for
- large sequences of random numbers<br />
- <img border="0" src="Monte_Images/2001_04_13_164904_shot.jpg"
- width="375" height="383" /> <img border="0" src=
- "Monte_Images/2001_04_13_164914_shot.jpg" width="372" height=
- "378" /></li>
-</ul>
-
-<p>Clearly, random number sequences assembled from subsequences
-with different seeds have different properties than sequences
-generated from a single seed.</p>
-
-<p>We use here the approach that random numbers should be
-generated on one node and distributed to the nodes which need
-them. This avoids the use of sequences generated from
-different seeds. Realistic applications often require
-time-consuming function evaluations; therefore, the inefficiency
-inherent in our approach is dwarfed by the large function
-evaluation times. This is also the approach used by
-Gropp et al. in "Using MPI".</p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Parallel_MC_Codes" id="Parallel_MC_Codes">Parallel
-Monte Carlo Codes</a></h2>
-
-<p>A parallel implementation of the Monte Carlo integration must
-have the random sequence distributed to the different nodes,
-either by communication or by a local generation of the
-sequence. We implement here three versions of the code.</p>
-
-<h4>Master-Workers paradigm</h4>
-
-<p>This code generates the random numbers on node 0 and passes
-them on to the worker nodes. The latter compute the Monte
-Carlo sub-sums and pass these sub-sums back to node 0, which
-completes the sums. The MPI_Reduce() function can be used to
-expedite the global sums.</p>
-
-<p><img border="0" src="Monte_Images/case_1.jpg" width="464"
-height="351" /></p>
-
-<p><a href=
-"codes/Metropolis_Gaussian_Parallel.c">Metropolis_Gaussian_Parallel.c</a>
-implements this approach.</p>
-
-<h4>Client-Server paradigm</h4>
-
-<p>This code implements the concept of a random-number server;
-this is a node reserved to serve random numbers with weight
-w(x,y) to whichever node requests them. The code sets node 0
-as a master node to direct the calculation and interface with the
-user. Node (<i>numprocs-1</i>) is the random-number
-server, and nodes <i>1</i> to (<i>numprocs-2</i>) are the worker
-nodes which compute the Monte Carlo sub-sums.</p>
-
-<p><img border="0" src="Monte_Images/case_2.jpg" width="476"
-height="406" /></p>
-
-<p>The code <a href=
-"codes/Metropolis_CS.c">Metropolis_CS.c</a> implements a version
-of this approach in which the functionality of the different
-nodes is relegated to C functions. This illustrates the fact that
-variables are local to the nodes.</p>
-
-<p>Gropp et al., in "Using MPI", implement this version of a Monte
-Carlo integration to calculate π.</p>
-
-<h4><a name="Full_Worker" id="Full_Worker">Full Workers
-version</a></h4>
-
-<p>In this version, node 0 administers the ranges of random
-numbers to be generated for each worker, but leaves the actual
-generation of the random numbers to the nodes themselves.
-Upon receiving a range, e.g., random numbers 1001 to 2000,
-the worker nodes generate these numbers and calculate the Monte
-Carlo sub-sums. This implies that they must <u>generate</u>
-and <u>discard</u> all random numbers prior to their assigned
-range. This is a waste of computing cycles, but given
-the speed of computing versus the network speed it may be the most
-efficient approach after all.</p>
-
-<p>The scheme is therefore</p>
-
-<img border="0" src="Monte_Images/case_3.jpg" width="443" height="350" />
-
-<p>It is left to the reader to implement this approach.</p>
-
-<!--#include virtual="$root_directory/shared/footer.shtml"-->
+++ /dev/null
-<h2><a name="Applicability" id="Applicability">Applicability of
-the Method</a></h2>
-
-<p>The random sampling of a function clearly works best if the
-function is a constant. (Why don't you then get the exact
-value of π in the example above?) If the function varies
-rapidly, or is locally peaked, the Monte Carlo method will not
-converge as rapidly. What is required is a "guidance" of
-the random sampling that emphasizes sampling where the function
-is large or varies rapidly. Let's illustrate the concept in
-1-D.</p>
-
-<p>< ! - - MATH: $I = \int_0^1 f(x) dx$ - - >
-<img width="102" height="39" align="middle" border="0" src=
-"Monte_Images/img7.gif" alt="$ I = \int_0^1 f(x) dx $" /><br />
-Introduce a weight function <b>w(x)</b></p>
-
-<p>< ! - - MATH: $I = \int_0^1 w(x) \frac{f(x)}{w(x)} dx$ - - >
-<img width="134" height="40" align="middle" border="0" src=
-"Monte_Images/img8.gif" alt=
-"$ I = \int_0^1 w(x) \frac{f(x)}{w(x)} dx $" /><br />
-where <b>w(x)</b> is required to satisfy</p>
-
-<p>< ! - - MATH: $\int_0^1 w(x) dx = 1$ - - >
-<img width="104" height="39" align="middle" border="0" src=
-"Monte_Images/img9.gif" alt="$ \int_0^1 w(x) dx = 1 $" /><br />
-Change variable to y</p>
-
-<p>< ! - - MATH: $y(x) = \int_0^{x'} w(x') dx'$ - - >
-<img width="140" height="44" align="middle" border="0" src=
-"Monte_Images/img10.gif" alt=
-"$ y(x) = \int_0^{x'} w(x') dx' $" /><br />
-such that</p>
-
-<p>< ! - - MATH: $dy = w(x) dx$ - - >
-<b><i>dy</i> = <i>w</i>(<i>x</i>) <i>dx</i></b>, <b><i>y</i>(0) =
-0</b>, and <b><i>y</i>(1) = 1.</b></p>
-
-<p>
-< ! - - MATH: $I = \int_0^1 dy \frac{ f( x(y) ) } { w( x(y) }$ - - >
-<img width="115" height="40" align="middle" border="0" src=
-"Monte_Images/img11.gif" alt=
-"$ I = \int_0^1 dy \frac{ f( x(y) ) } { w( x(y) } $" /><br />
-The weight function <b>w(x)</b> is chosen such that <b>[f/w]</b>
-is aproproximately constant. The Monte Carlo sampling of
-<b>[f/w]</b> with uniform random sampling in <b>y</b> variable
-will then be accurate.</p>
-
-<p>
-< ! - - MATH: $I = \frac{1}{N} \sum_{i=1}^{N} \frac{ f( x(y_i) ) } { w( x(y_i) }$ - - >
-<img width="141" height="40" align="middle" border="0" src=
-"Monte_Images/img15.gif" alt=
-"$ I = \frac{1}{N} \sum_{i=1}^{N} \frac{ f( x(y_i) ) } { w( x(y_i) } $" /><br />
-
-Koonin and Meridith (Computational Physics, 1989) give the
-example</p>
-
-<p>< ! - - MATH: $I = \int_0^1 dx \frac{1}{1+x^2}$ - - >
-<img width="104" height="39" align="middle" border="0" src=
-"Monte_Images/img12.gif" alt=
-"$ I = \int_0^1 dx \frac{1}{1+x^2} $" /><br />
-with</p>
-
-<p>< ! - - MATH: $w(x) = \frac{1}{3} ( 4 - 2 x )$ - - >
-<img width="125" height="34" align="middle" border="0" src=
-"Monte_Images/img13.gif" alt=
-"$ w(x) = \frac{1}{3} ( 4 - 2 x ) $" /><br />
-In this simple case, the inversion x(y) is possible
-analytically.</p>
-
-<p>< ! - - MATH: $y = \frac{1}{3} x ( 4 - x )$ - - >
-<img width="101" height="34" align="middle" border="0" src=
-"Monte_Images/img14.gif" alt=
-"$ y = \frac{1}{3} x ( 4 - x ) $" /><br />
- <b><i>x</i> = 2 - (4 -3 <i>y</i> )</b></p>
-
-<p>In general, this inversion x(y) is difficult or impossible to
-perform analytically. An alternate approach must be used.</p>
-
-<p>The relation <b><i>dy</i> = <i>w</i>(<i>x</i>)
-<i>dx </i></b> implies another interpretation of the
-method. Random numbers uniformly distributed in <b>y</b>
-will be distributed according to <b>w(x)</b> in <b>x</b>.
-Therefore, the guiding action of <b>w(x)</b> can be achieved by
-generating random numbers according the the probability profile
-<b>w(x)</b>. This holds in phase spaces of arbitrary dimension.
-The problem translates into that of finding random distribution
-of arbitrary probability distributions.</p>
-
-<p>Uniform random numbers in interval $[0,1]$ have an average value
-of 0.5. A binning in intervals <b>dx</b> will show an
-average number of random number</p>
-
-<p>number / bin = total # of random numbers / number of
-bins.</p>
-
-<p>The number of random numbers in the bins will fluctuate around
-the average with a Gaussian distribution.</p>
-
-<p>Random numbers distributed according to the distribution
-<b>w(x)</b> will bin according to <b>w(x)</b>.</p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Metropolis Algorithm">Metropolis Algorithm</a></h2>
-
-<p>The Metropolis algorithm generates a random walk guided by the
-weight function <b>w(x)</b>. The algorithm in 2-D involves
-the following steps. Starting from a point
-<b>(x<sub>i</sub>, y<sub>i</sub>)</b>.</p>
-
-<ol>
- <li>choose <b>η</b> (step size)</li>
-
- <li>generate 2 uniformly distributed random numbers,
- <b>R<sub>1</sub></b> and <b>R<sub>2</sub></b>, in the interval
- (-1,1)</li>
-
- <li>calculate a new trial point according to<br />
- < ! - - MATH: $x_{i+1}^T = x_i + \delta x_R$ - - >
- <img width="116" height="34" align="middle" border="0" src=
- "Monte_Images/img16.gif" alt=
- "$ x_{i+1}^T = x_i + \delta x_R $" /><br />
- < ! - - MATH: $y_{i+1}^T = y_i + \delta y_R$ - - >
- <img width="112" height="34" align="middle" border="0" src=
- "Monte_Images/img17.gif" alt=
- "$ y_{i+1}^T = y_i + \delta y_R $" /><br />
- where the steps in <b>x</b> and <b>y</b> are <b>η</b>
- <b>R<sub>1</sub></b> and <b>η</b> <b>R<sub>2</sub></b>
- respectively</li>
-
- <li> evaluate the ratio of the weight function at the
- trial point to the one at the old point<br />
- < ! - - MATH: $r = \frac{ w( x_{i+1}^T, y_{i+1}^T ) } { w( x_{i}, y_{i}
-) }$ - - >
- <img width="110" height="47" align="middle" border="0" src=
- "Monte_Images/img18.gif" alt=
- "$ r = \frac{ w( x_{i+1}^T, y_{i+1}^T ) } { w( x_{i}, y_{i} ) } $" /></li>
-
- <li>if <b>r &ge; 1</b>, accept the new point
- <b>(x<sub>i+1</sub>,y<sub>i+1</sub>)</b> =
- <b>(x<sub>i+1</sub><sup>T</sup>,y<sub>i+1</sub><sup>T</sup>)</b>
- and loop back to step (2)</li>
-
- <li>if <b>r &lt; 1</b>, accept the new point with probability
- <b>r</b>. That is, generate <b>R</b>, a random number in
- the interval <b>(0,1)</b>, and accept
- <b>(x<sub>i+1</sub><sup>T</sup>,y<sub>i+1</sub><sup>T</sup>)</b>
- as the new point <b>(x<sub>i+1</sub>,y<sub>i+1</sub>)</b> if
- <b>r &gt; R</b>, and loop back to step (2)</li>
-
- <li>otherwise, reject the trial point and keep the previous
- point as the new one, <b>(x<sub>i+1</sub>,y<sub>i+1</sub>)</b> =
- <b>(x<sub>i</sub>,y<sub>i</sub>)</b>, then loop back to step
- (2)</li>
-</ol>
-
-<p>Koonin and Meredith (Computational Physics, 1989) provide a
-simple proof that the algorithm produces the desired random
-number distribution <b>w(x)</b>.</p>
-
-<p>The step size <b>η</b> in the Metropolis algorithm is
-arbitrary. If it is chosen small, the acceptance ratio will be
-high (a point in the neighborhood of an acceptable point will
-also likely be acceptable), but overall coverage of the
-phase space will be very slow. If <b>η</b> is large,
-the phase space will be covered efficiently, but a new point will
-seldom be accepted by the Metropolis criterion. Inefficiency
-follows in either case. An empirical rule is to choose
-<b>η</b> so as to obtain an acceptance ratio of about 0.5 -
-0.6.</p>
-
-<p>A concern about the algorithm is that the random numbers
-generated by the Metropolis algorithm are correlated. A
-way to guard against this (Koonin and Meredith, Computational
-Physics, 1989) is to calculate the autocorrelation
-function</p>
-
-<p>
-<!-- MATH: $c(k) = \frac{ \langle f_i f_{i+k} \rangle - \langle f_i \rangle^2 } { \langle f_i^2 \rangle - \langle f_i \rangle^2 }$ -->
-<img width="160" height="43" align="middle" border="0" src=
-"Monte_Images/img19.gif" alt=
-"$ c(k) = ( &lt;f_i f_{i+k}&gt; - &lt;f_i&gt;^2 ) / ( &lt;f_i^2&gt; - &lt;f_i&gt;^2 ) $" /><br />
-
-and select every <b>k</b><sup><b>th</b></sup> random number
-generated by the algorithm, with <b>k</b> chosen to produce a
-small value of <b>c(k)</b>.</p>
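<p>The correlation test can be sketched as follows; this is our own
illustration (the function name is ours), assuming the sequence
values <b>f<sub>i</sub></b> are stored in an array:</p>

```c
/* Autocorrelation c(k) of a sequence f[0..n-1]:
   c(k) = ( <f_i f_{i+k}> - <f_i>^2 ) / ( <f_i^2> - <f_i>^2 ).
   The cross term averages over the n-k available pairs. */
double autocorrelation(const double *f, int n, int k)
{
    int i;
    double mean = 0.0, mean2 = 0.0, cross = 0.0;

    for (i = 0; i < n; i++) {          /* <f_i> and <f_i^2> */
        mean  += f[i];
        mean2 += f[i] * f[i];
    }
    mean  /= n;
    mean2 /= n;

    for (i = 0; i < n - k; i++)        /* <f_i f_{i+k}> */
        cross += f[i] * f[i + k];
    cross /= (n - k);

    return (cross - mean * mean) / (mean2 - mean * mean);
}
```

<p>By construction <b>c(0)</b> = 1, and <b>c(k)</b> decays toward
zero as the lag <b>k</b> exceeds the correlation length of the
walk.</p>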
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Guided" id="Guided">Guided Monte Carlo
-Integral</a></h2>
-
-<p>Let us consider the integral<br />
- <img width="224" height="39" align="middle" border="0"
-src="Monte_Images/img24.gif" alt=
-"$ I = \int dx \int dy (x^2+y^2) e^{ - (x^2+y^2) } $" /><br />
-The integrand is peaked in a torus shape around the
-origin.<br />
-<img border="0" src="Monte_Images/Integrated_Function.gif" width=
-"450" height="150" /><br />
-The Gaussian function</p>
-
-<p><!-- MATH: $w(x,y) = \frac{1}{\pi} e^{ - (x^2+y^2) }$ -->
-<img width="149" height="39" align="middle" border="0" src=
-"Monte_Images/img25.gif" alt=
-"$ w(x,y) = \frac{1}{\pi} e^{ - (x^2+y^2) } $" /><br />
-can be chosen to guide the random number selection.</p>
-
-<p>The program <a href=
-"src/Metropolis_Gaussian_Demo.c">Metropolis_Gaussian_Demo.c</a>
-illustrates the generation of random numbers distributed according
-to <b>w(x,y)</b>. Note that arbitrary values for the step size
-(<b>η</b>=0.2) and <b>k</b> (k=1) were chosen; no effort was
-made to choose them optimally.</p>
-
-<p>This code, with 2000 points, produced the following random
-number distribution.</p>
-
-<p><img border="0" src="Monte_Images/2001_04_13_161850_shot.jpg"
-width="377" height="375" /><br />
-The program <a href=
-"src/Metropolis_Gaussian_Integral.c">Metropolis_Gaussian_Integral.c</a>
- performs the integral.</p>
-
-<p><i>Metropolis_Gaussian_Integral -p > data_file</i><br />
-<i>or</i><br />
-<i>Metropolis_Gaussian_Integral</i></p>
-
-<p>This code prompts the user for more points to be added to the
-Monte Carlo integral. The calculation of the error may show that
-the result is not accurate, given that the choices of
-<b>η</b> and <b>k</b> were not optimized.</p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Parallel_MC" id="Parallel_MC">Parallel Monte Carlo
-Algorithm</a></h2>
-
-<h2><a name="Random_Numbers_Generation" id=
-"Random_Numbers_Generation">Random Numbers Generation</a></h2>
-
-<p>Pseudo-random numbers are often generated by a linear
-congruential algorithm. The integer random number
-<b>x<sub>i+1</sub></b> is generated from the previous one
-by<br />
-<b><i>x</i><sub><i>i</i>+1</sub> = (
-<i>x</i><sub><i>i</i></sub> <i>a</i> + <i>b</i> ) <i>mod
-c</i></b><br />
-The coefficients <b>a</b>, <b>b</b> and <b>c</b> are chosen to
-generate statistically good random numbers in a specific integer
-range (Numerical Recipes). Often the random numbers are
-generated using different linear congruential relations and mixed
-via random selection, as in ran1 and ran2 (Numerical
-Recipes).</p>
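<p>The recursion can be sketched in a few lines of C. The constants
below are one well-known choice (the "minimal standard" generator,
<b>a</b> = 16807, <b>b</b> = 0, <b>c</b> = 2³¹ - 1); any
statistically validated triple can be substituted:</p>

```c
/* Minimal linear congruential generator: x_{i+1} = (a x_i + b) mod c.
   64-bit arithmetic keeps the product a * x_i from overflowing. */
static unsigned long long lcg_state = 1ULL;

void lcg_seed(unsigned long long seed)
{
    lcg_state = seed;
}

unsigned long long lcg_next(void)
{
    const unsigned long long a = 16807ULL, b = 0ULL, c = 2147483647ULL;
    lcg_state = (a * lcg_state + b) % c;
    return lcg_state;
}

/* Uniform deviate in (0,1) built from the integer sequence. */
double lcg_uniform(void)
{
    return (double)lcg_next() / 2147483647.0;
}
```

<p>The sequence is entirely determined by the seed, which is the
point of the discussion that follows: two nodes seeded identically
produce exactly the same numbers.</p>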
-
-<p>The procedure to generate random numbers is
-deterministic. Two different nodes on a parallel machine
-will produce exactly the same sequence if "seeded" in the same
-way. The question then arises as to how to generate "good" random
-numbers on the different nodes of the parallel computer that will
-remain meaningful when used as a single set.</p>
-
-<p>One solution is to generate sequences on each node based on
-different seeds. The code <a href=
-"src/random_demo_2_series.c">random_demo_2_series.c</a>
-simulates this approach by generating 1 or 2 sequences of random
-numbers that can be analyzed for their quality. The idea
-is to compare the quality of a sequence in which the first and
-second halves are generated with different seeds against that of a
-sequence generated from a single seed.</p>
-
-<p><i>random_demo_2_series 2 1000 > data_file</i></p>
-
-<p>The following filters can be used to get a feel for the quality
-of these random numbers.</p>
-
-<ul>
- <li><i><a href="src/void.c">void.c</a></i> which locates
- voids in the plane covered by 1 or 2 sequences of random
- numbers. This can be used to illustrate differences in coverage
- of the plane in short sequences. Example:<br />
- <i>random_demo_2_series 2 1000 | void > data_file</i><br />
- produces the following graphs in <em>gnuplot</em> for 1 and 2
- seeded series of 1000 points<br />
- <img border="0" src="Monte_Images/2001_04_22_152359_shot.jpg"
- width="214" height="210" /> <img border="0"
- src="Monte_Images/2001_04_22_152333_shot.jpg" width="189"
- height="212" /><br /></li>
-
- <li><i><a href=
- "src/bin_distribution.c">bin_distribution.c</a></i> which
- bins the random numbers in a 2-D grid, then further bins the
- distribution as a function of the number of random numbers per
- bin.<br />
- <i>random_demo_2_series 2 20000 | bin_distribution >
- data_file<br /></i> produces the following distributions for
- large sequences of random numbers<br />
- <img border="0" src="Monte_Images/2001_04_13_164904_shot.jpg"
- width="375" height="383" /> <img border="0" src=
- "Monte_Images/2001_04_13_164914_shot.jpg" width="372" height=
- "378" /></li>
-</ul>
-
-<p>Clearly, random number sequences generated from two different
-seeds have different properties than sequences generated from a
-single seed.</p>
-
-<p>We use here the approach that random numbers should be
-generated on one node and distributed to the nodes that need
-them. This avoids the use of sequences generated from
-different seeds. Realistic applications often require
-time-consuming function evaluations; therefore, the inefficiency
-inherent in our approach is dwarfed by the large function
-evaluation times. This is also the approach used by
-Gropp et al. in "Using MPI".</p>
-<hr />
- <a href="#Monte_Carlo_Techniques"><img border="0" src=
-"../GIF_files_PHYS405/arw-dkblue-up.gif" width="23" height=
-"23" /> </a><b>Back to top of page</b>
-<hr />
-
-<h2><a name="Parallel_MC_Codes" id="Parallel_MC_Codes">Parallel
-Monte Carlo Codes</a></h2>
-
-<p>A parallel implementation of the Monte Carlo integration must
-have the random sequence distributed to the different nodes,
-either by communication or by a local generation of the
-sequence. We implement here three versions of the code.</p>
-
-<h4>Master-Worker paradigm</h4>
-
-<p>This code generates the random numbers on node 0 and passes
-them on to the worker nodes. The latter compute the Monte
-Carlo sub-sums and pass them back to node 0, which completes
-the sums. The MPI_Reduce() function can be used to expedite
-the global sums.</p>
-
-<p><img border="0" src="Monte_Images/case_1.jpg" width="464"
-height="351" /></p>
-
-<p><a href=
-"src/Metropolis_Gaussian_Parallel.c">Metropolis_Gaussian_Parallel.c</a>
-implements this approach.</p>
-
-<h4>Client-Server paradigm</h4>
-
-<p>This code implements the concept of a random number server:
-a node reserved to serve random numbers with weight
-<b>w(x,y)</b> to whichever node requests them. The code sets node 0
-as a master node to direct the calculation and interface with the
-user. Node (<i>numprocs-1</i>) is the random number
-server, and nodes <i>1</i> to (<i>numprocs-2</i>) are the worker
-nodes which compute the Monte Carlo sub-sums.</p>
-
-<p><img border="0" src="Monte_Images/case_2.jpg" width="476"
-height="406" /></p>
-
-<p>The code <a href=
-"src/Metropolis_CS.c">Metropolis_CS.c</a> implements a version
-of this approach in which the functionality of the different
-nodes is relegated to C functions. This illustrates the fact that
-variables are local to the nodes.</p>
-
-<p>Gropp et al, in "Using MPI" implements this version of a Monte
-Carlo integration to calculate π.</p>
-
-<h4><a name="Full_Worker" id="Full_Worker">Full Workers
-version</a></h4>
-
-<p>In this version, node 0 administers the ranges of random
-numbers to be generated by each worker, but leaves the actual
-generation of the random numbers to the nodes themselves.
-Upon receiving a range, e.g., random numbers 1001 to 2000,
-the worker nodes generate these numbers and calculate the Monte
-Carlo sub-sums. This implies that they must <u>generate</u>
-and <u>discard</u> all random numbers preceding their assigned
-range. This wastes computing cycles, but given the speed of
-computing versus the network speed it may be the most efficient
-approach after all.</p>
-
-<p>The scheme is therefore</p>
-
-<img border="0" src="Monte_Images/case_3.jpg" width="443" height="350" />
-
-<p>It is left to the reader to implement this approach.</p>
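<p>As a starting point, the generate-and-discard bookkeeping can be
sketched without any message passing. This is our own serial
illustration (function names are ours; the deviates themselves are
summed here, whereas the real code would accumulate the integrand at
the sampled points):</p>

```c
#include <stdlib.h>
#include <math.h>

/* Uniform deviate in [0,1); stands in for the shared generator. */
static double uniform01(void)
{
    return rand() / ((double)RAND_MAX + 1.0);
}

/* Sub-sum over random numbers `first` to `last` (1-based) of the
   common sequence: every node seeds identically, discards the
   numbers preceding its range, then uses its own. */
double worker_subsum(long first, long last)
{
    long i;
    double sum = 0.0;

    srand(12345);                      /* common seed on every node */
    for (i = 1; i < first; i++)        /* generate and discard */
        (void)uniform01();
    for (i = first; i <= last; i++)    /* generate and use */
        sum += uniform01();
    return sum;
}
```

<p>Because every worker replays the same deterministic sequence, the
sub-sums over disjoint ranges combine into exactly the single-node
result, which is the property the parallel version relies on.</p>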