
When computing the expected value of an objective function, we have two ways to approximate it:

1. Sample Average Approximation (SAA): $$ \frac{1}{N}\sum_{i=1}^N f(x,\xi^i), $$ where the $\xi^i$ are i.i.d. samples drawn from the distribution of $\xi$.
2. Numerical integration (e.g., the Monte Carlo method): $$ \frac{1}{N} \sum_{\xi^i\in\Xi^N} f(x,\xi^i)\,p(\xi^i), $$ where $p(\cdot)$ is the probability density function and $\Xi^N\subset\Xi$ is a finite approximation of the support $\Xi$ of the distribution.
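
For concreteness, here is a minimal Python sketch of the two approximations on a toy problem of my own choosing, $f(x,\xi)=(x-\xi)^2$ with $\xi\sim N(0,1)$, so the exact value is $x^2+1$; the truncation interval $[-5,5]$ and the use of the cell width $\Delta\xi$ as the $1/N$-type weight are my assumptions, not part of the question.

```python
import numpy as np

# Toy problem (my own choice, not from the question):
# f(x, xi) = (x - xi)^2 with xi ~ N(0, 1), so E[f(x, xi)] = x^2 + 1.
rng = np.random.default_rng(0)

def f(x, xi):
    return (x - xi) ** 2

def p(xi):
    # Standard normal density.
    return np.exp(-xi ** 2 / 2) / np.sqrt(2 * np.pi)

x = 0.5
true_value = x ** 2 + 1.0
N = 10_000

# 1. SAA: average f over i.i.d. draws of xi.
xi_samples = rng.standard_normal(N)
saa_value = np.mean(f(x, xi_samples))

# 2. Numerical integration: weighted sum over a grid Xi^N that truncates
#    the support to [-5, 5]; the cell width dxi plays the role of the
#    1/N-type weight in the formula above.
grid = np.linspace(-5.0, 5.0, N)
dxi = grid[1] - grid[0]
ni_value = np.sum(f(x, grid) * p(grid)) * dxi

print(f"true value        : {true_value:.6f}")
print(f"SAA               : {saa_value:.6f}  (abs. error {abs(saa_value - true_value):.2e})")
print(f"numerical integr. : {ni_value:.6f}  (abs. error {abs(ni_value - true_value):.2e})")
```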

By the law of large numbers, the SAA value converges to the true expectation $\mathbb{E}[f(x,\xi)]$ as $N\to\infty$, and the numerical-integration value converges as well. From a numerical point of view, however, what is the difference between these two methods in general?

In particular, for stochastic optimization or expected residual minimization (in the least-squares sense) of a system of equations, how do the two approaches differ, for example in their convergence rates?
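
To make the rate question concrete, here is a small experiment one could run on the same toy problem to see how the two errors behave as $N$ grows; the repetition count, grid, and truncation interval are again my own choices, and nothing here is specific to stochastic optimization.

```python
import numpy as np

# Compare how the errors of the two approximations shrink with N,
# for the same toy problem as above: E[(x - xi)^2] = x^2 + 1, xi ~ N(0, 1).
rng = np.random.default_rng(1)

def f(x, xi):
    return (x - xi) ** 2

def p(xi):
    # Standard normal density.
    return np.exp(-xi ** 2 / 2) / np.sqrt(2 * np.pi)

x, true_value = 0.5, 0.5 ** 2 + 1.0

for N in [10, 100, 1_000, 10_000]:
    # SAA error, averaged over repetitions because the estimator is random.
    saa_errs = []
    for _ in range(200):
        xi = rng.standard_normal(N)
        saa_errs.append(abs(np.mean(f(x, xi)) - true_value))
    saa_err = np.mean(saa_errs)

    # Deterministic grid rule on the truncated support [-5, 5].
    grid = np.linspace(-5.0, 5.0, N)
    dxi = grid[1] - grid[0]
    ni_err = abs(np.sum(f(x, grid) * p(grid)) * dxi - true_value)

    print(f"N={N:6d}  SAA mean error={saa_err:.3e}  grid error={ni_err:.3e}")
```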

  • Here is an interesting paper: Monte Carlo Sampling Approach to Stochastic Programming
    – TheSimpliFire, Aug 18, 2021 at 19:25
  • @TheSimpliFire Thanks for sharing the article. If I understand correctly, SAA has the same rate of convergence as the Monte Carlo method. Overall, I think SAA has practical advantages in terms of implementation, which is why most stochastic optimization techniques in recent years focus on SAA or its variants, e.g., variance-reduction schemes.
    – Keith, Aug 19, 2021 at 5:18
