Monte Carlo methods (MCMs) are based on the
simulation of stochastic processes whose expected values are equal to
computationally interesting quantities. MCMs are simple to construct
and are often designed to mirror some process whose
behavior is understood only in a statistical sense.
Despite the universality of MCMs,
a serious drawback is their slow convergence. The error of MCMs is
stochastic, and the uncertainty in an average taken from N samples is
O(N^{-1/2}) by virtue of the central limit theorem.
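As a minimal illustration of this error behavior (our own sketch, not part of the original text; the integrand exp(x) and the sample sizes are arbitrary choices), the following Python snippet estimates a one-dimensional integral by plain Monte Carlo and reports the sample standard error, which shrinks roughly like N^{-1/2}.

```python
import numpy as np

def mc_integrate(f, n, rng=None):
    """Plain Monte Carlo estimate of the integral of f over [0, 1].

    Returns the sample mean and its standard error, which decreases
    like O(N^{-1/2}) by the central limit theorem.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.random(n)          # N pseudorandom points in [0, 1)
    y = f(x)
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

if __name__ == "__main__":
    f = lambda x: np.exp(x)    # exact integral over [0, 1] is e - 1
    for n in (10**3, 10**4, 10**5):
        est, err = mc_integrate(f, n)
        print(f"N={n:>7d}  estimate={est:.6f}  std. error={err:.2e}")
```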
One generic approach to improving the convergence of MCMs has been
the use of highly uniform quasirandom numbers (QRNs)
in place of the usual
pseudorandom numbers (PRNs). While PRNs are constructed
to mimic the behavior of truly random numbers,
QRNs are constructed to be distributed as evenly as
mathematically possible. With QRNs, for example, the convergence of
numerical integration can sometimes be improved to O(N^{-1}).
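To make the contrast concrete, here is a hedged sketch (our own illustration; the van der Corput sequence in base 2 serves as a simple one-dimensional QRN generator, and the integrand is arbitrary) that compares the integration error from pseudorandom points with that from quasirandom points. For smooth integrands the quasirandom error typically decays much closer to O(N^{-1}).

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput sequence, a simple 1-D QRN set."""
    points = np.empty(n)
    for i in range(n):
        x, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            denom *= base
            k, remainder = divmod(k, base)
            x += remainder / denom     # digit-reversal of k in the given base
        points[i] = x
    return points

def integration_error(points, f, exact):
    return abs(f(points).mean() - exact)

if __name__ == "__main__":
    f, exact = (lambda x: np.exp(x)), np.e - 1.0   # integral over [0, 1]
    rng = np.random.default_rng(0)
    for n in (10**2, 10**3, 10**4):
        prn_err = integration_error(rng.random(n), f, exact)
        qrn_err = integration_error(van_der_corput(n), f, exact)
        print(f"N={n:>6d}  PRN error={prn_err:.2e}  QRN error={qrn_err:.2e}")
```

Note that the quasirandom points are used exactly as the pseudorandom ones are; only the point generator changes, which is what makes this acceleration strategy generic.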
There are many Monte Carlo application areas besides numerical integration. A wide class of such applications can be characterized as those based on the generation of random walks via a Markov process. In these cases the Monte Carlo method rests on the probabilistic application of a linear operator. However, the applicability of QRNs for convergence acceleration in this wide class of Markov chain MCMs has never been carefully investigated. Therefore, we are currently working to determine best practices for, and the performance of, quasi-Monte Carlo methods for Markov chain problems. Such a study includes a theoretical investigation into the convergence and computational complexity of these methods, as well as empirical comparisons of these methods to standard MCMs.
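The following sketch shows what a Markov chain MCM of this kind can look like in practice. It is our own minimal example, not one of the methods under study here: it estimates one component of the solution of a linear system x = Hx + f by averaging scores over random walks whose transitions apply the operator H probabilistically (an Ulam-von Neumann style scheme). The specific matrix, termination probability, and walk count are illustrative assumptions.

```python
import numpy as np

def walk_estimate(H, f, start, n_walks=100_000, p_absorb=0.5, rng=None):
    """Estimate x[start] for x = H x + f via random walks on the state space.

    From state i the walk either stops (probability p_absorb) or jumps to a
    uniformly chosen state j, multiplying its weight by H[i, j] / P(i -> j).
    Averaging the accumulated scores weight * f[state] over many walks gives
    an unbiased estimate of x[start], provided the Neumann series for
    (I - H)^{-1} converges.
    """
    rng = rng or np.random.default_rng(0)
    n = len(f)
    p_move = (1.0 - p_absorb) / n          # uniform transition probability
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = start, 1.0, 0.0
        while True:
            score += weight * f[state]     # score every visited state
            if rng.random() < p_absorb:
                break
            nxt = rng.integers(n)
            weight *= H[state, nxt] / p_move
            state = nxt
        total += score
    return total / n_walks

if __name__ == "__main__":
    # Small test system whose spectral radius is well below 1.
    H = np.array([[0.1, 0.2, 0.1],
                  [0.0, 0.3, 0.2],
                  [0.2, 0.1, 0.1]])
    f = np.array([1.0, 2.0, 3.0])
    exact = np.linalg.solve(np.eye(3) - H, f)
    est = walk_estimate(H, f, start=0)
    print(f"random-walk estimate x[0] ~ {est:.4f}   exact = {exact[0]:.4f}")
```

In this construction the random walk is driven by ordinary pseudorandom numbers; the question raised above is precisely how, and how profitably, quasirandom sequences can be substituted into such walks.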