Monte Carlo methods

Suppose we want to estimate the value of an integral

\begin{equation} I=\int_D g(x)\,dx, \end{equation} where $D$ is the $d$-dimensional unit cube $[0,1]^d$ and $x\in D$. We could use Gaussian quadrature to compute the integral, but if the domain $D$ lies in a high-dimensional space or the function $g(x)$ behaves nonlinearly, the computation quickly becomes expensive.

In such cases, Monte Carlo methods are a more suitable approach. We treat the independent variable $x$ and the function $g(x)$ as a random variable and a random function, respectively, and draw $N$ independent and identically distributed samples $x_1,\ldots,x_N$ from the uniform distribution on $D$. The classic Monte Carlo estimator for the integral is then defined as

\begin{equation} I\approx I_N=\frac{1}{N}\sum_{n=1}^N g(x_n), \end{equation} which is simply the sample mean of $g$ over the drawn points. Since $D=[0,1]^d$ has unit volume, the expectation $\mathbb{E}[g(x)]$ under the uniform distribution equals $I$, so $I_N$ is an unbiased estimate of the integral. By the strong law of large numbers, $I_N$ converges to $I$ as $N\rightarrow\infty$ with probability 1.
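
As a rough illustration, here is a minimal NumPy sketch of this estimator; the integrand `g`, the dimension `d = 10`, and the sample sizes are arbitrary choices made for the example, not taken from the text above.

```python
import numpy as np

def mc_integral(g, d, N, rng=None):
    """Classic Monte Carlo estimate of the integral of g over the unit cube [0, 1]^d."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw N i.i.d. samples uniformly from [0, 1]^d.
    x = rng.uniform(size=(N, d))
    # Sample mean: I_N = (1/N) * sum_n g(x_n).
    return np.mean(g(x))

# Illustrative integrand (an assumption, not from the text): g(x) = exp(-||x||^2).
# Its exact integral over [0, 1]^d factorizes as (sqrt(pi)/2 * erf(1))^d, roughly 0.054 for d = 10,
# so the printed estimates should approach that value as N grows.
g = lambda x: np.exp(-np.sum(x**2, axis=1))
for N in (10**3, 10**4, 10**5, 10**6):
    print(N, mc_integral(g, d=10, N=N))
```

Note that the estimator only requires pointwise evaluations of $g$ at random samples, which is why its cost does not grow with a tensor-product grid the way quadrature does.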