The Lilliefors test is a well-known statistical test for normality. It is based on the Kolmogorov–Smirnov test, except that the reference CDF is replaced by the CDF of the normal distribution with $\mu, \sigma^2$ set to the values estimated from the data.
In other words, given data $\{X_1, \ldots, X_n\}$ we form the empirical CDF $F_n$ and consider the test statistic $$E := \sup_x \left|F_n(x) - \text{cdf}_{N(\hat{\mu}, \hat{\sigma}^2)}(x)\right|.$$ The idea is that if the data do come from a normal distribution, then $E$ should be small. In [1], Lilliefors used a Monte Carlo method to compute the critical values that quantify this smallness.
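For concreteness, here is a minimal sketch of how the statistic $E$ can be computed. The function name and implementation details are illustrative, not taken from [1]; it uses the usual one-sided KS differences against the fitted normal CDF.

```python
import numpy as np
from math import erf, sqrt

def lilliefors_statistic(x):
    """Sketch: E = sup_x |F_n(x) - cdf_{N(mu_hat, sigma_hat^2)}(x)|."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    mu = x.mean()
    sigma = x.std(ddof=1)  # sample standard deviation
    # Fitted normal CDF evaluated at the ordered sample points
    z = (x - mu) / sigma
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    # The sup of |F_n - cdf| is attained at a jump of F_n:
    # just after x_(i), F_n = i/n; just before, F_n = (i-1)/n
    d_plus = np.arange(1, n + 1) / n - cdf
    d_minus = cdf - np.arange(0, n) / n
    return max(d_plus.max(), d_minus.max())
```

For normal data the statistic shrinks roughly like $1/\sqrt{n}$, so for moderate $n$ it should be well below typical rejection thresholds.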
Question
Since $E$ is not distribution-free, shouldn't Lilliefors have run a separate Monte Carlo estimate for every choice of $\mu$ and $\sigma^2$? Why is a single table enough?
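To make the question concrete, here is a sketch of the kind of Monte Carlo table construction described in [1]: simulate many samples from one fixed normal distribution, compute $E$ for each, and take an upper quantile as the critical value. This is my own illustrative code (function names, repetition counts, and the choice of $N(0,1)$ are assumptions), not Lilliefors' actual program.

```python
import numpy as np
from math import erf, sqrt

def lilliefors_statistic(x):
    """E = sup_x |F_n(x) - cdf_{N(mu_hat, sigma_hat^2)}(x)| (illustrative sketch)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    mu = x.mean()
    sigma = x.std(ddof=1)
    z = (x - mu) / sigma
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    d_plus = np.arange(1, n + 1) / n - cdf
    d_minus = cdf - np.arange(0, n) / n
    return max(d_plus.max(), d_minus.max())

def mc_critical_value(n, alpha=0.05, reps=2000, seed=0):
    """Estimate the level-alpha critical value for sample size n by simulating
    samples from a single normal distribution (here N(0,1))."""
    rng = np.random.default_rng(seed)
    stats = [lilliefors_statistic(rng.standard_normal(n)) for _ in range(reps)]
    return float(np.quantile(stats, 1.0 - alpha))
```

The puzzle in the question is exactly why simulating from one $(\mu, \sigma^2)$, as above, should yield a table valid for all of them.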
Reference
- [1] Hubert W. Lilliefors, "On the Kolmogorov–Smirnov Test for Normality with Mean and Variance Unknown"
- [2] F. N. David, N. L. Johnson, "The Probability Integral Transformation When Parameters are Estimated from the Sample"