Assume a stochastic simulation or test with a control variable. The task is to visualize the distribution of outcomes to demonstrate the effect under study. The objective is a smooth plot that does not lose resolution on the important effects; for smoothness, the plot needs to be interpolated.
The simulation or test can be run at different continuous values of a control variable (predictor) $X$. The samples are noisy, since there is natural variation with an unknown distribution.
The control variable is of interest over a reasonable range. Within that range, a grid of $M$ points of the control variable is sampled $N$ times each, so the total number of samples is $MN$.
There are two options:

1. a denser grid (larger $M$) sampled fewer times (smaller $N$), or
2. a sparser grid (smaller $M$) sampled more times (larger $N$).
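To make the trade-off concrete, here is a minimal sketch of the two designs under a fixed sampling budget $MN$. The response function `sin(x)`, the Gaussian noise, and the budget of 240 samples are all illustrative assumptions standing in for the real simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, noise=0.3):
    # Hypothetical noisy response; stands in for the real simulation.
    return np.sin(x) + rng.normal(0.0, noise, size=np.shape(x))

BUDGET = 240  # total number of samples MN is fixed

def run_design(M, N, lo=0.0, hi=2 * np.pi):
    """Sample a grid of M points N times each; return grid and per-point means."""
    assert M * N == BUDGET
    grid = np.linspace(lo, hi, M)
    samples = np.array([simulate(np.full(N, x)) for x in grid])  # shape (M, N)
    return grid, samples.mean(axis=1)

# Option 1: dense grid, few repeats.  Option 2: sparse grid, many repeats.
dense_x, dense_mean = run_design(M=60, N=4)
sparse_x, sparse_mean = run_design(M=12, N=20)
```

Plotting `dense_mean` against `sparse_mean` shows the trade-off directly: the dense design resolves finer structure in $X$ but with noisier point means, while the sparse design gives steadier means at fewer locations.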
Question: Are there any rules of thumb or theorems that would help in making this decision for better-looking results?
A denser grid will not help if the data are too noisy, but reducing noise by averaging over a large $N$ leads to poor resolution in $X$.
Do I even gain anything from a larger $N$? The answer also depends on the interpolation method chosen, but are there, in general, preferable ways to analyze this?
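What a larger $N$ buys can be checked directly: the standard error of each grid-point mean shrinks roughly like $\sigma/\sqrt{N}$, so quadrupling $N$ only halves the noise. A quick Monte Carlo sketch (the noise level $\sigma$ and the Gaussian noise model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical check that averaging N noisy replicates shrinks the noise
# on each grid-point mean roughly like sigma / sqrt(N).
sigma = 0.3    # assumed noise level of a single sample
trials = 2000  # Monte Carlo repetitions per N

for N in (1, 4, 16, 64):
    means = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)
    print(N, means.std(), sigma / np.sqrt(N))
```

The diminishing returns of $\sqrt{N}$ are the usual argument against very large $N$: past a point, extra repeats buy little smoothness per sample spent.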
Background: In my case the results come from a simulation of a complex system, so there is not necessarily any analytic solution that is correct. Without an analytic solution, the interpolation has no known functional form to fit. Since there is no analytic model whose parameters could be estimated, the methods must rely only on the data; that is, they must be nonparametric.
The sensible interpolation methods are thus splines and nearest-data-point approaches. The results are fairly continuous: small changes in the control variable lead to small changes in the outcomes, so interpolation makes sense.
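For the spline option, a short sketch of fitting a cubic spline through the per-grid-point means using `scipy.interpolate.make_interp_spline` (the grid, the `sin(x)` response, and the noise level are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

rng = np.random.default_rng(2)

# Illustrative data: per-grid-point means of N noisy replicates
# (the sin(x) response and the noise level are assumptions).
M, N = 12, 20
grid = np.linspace(0.0, 2 * np.pi, M)
means = np.array([np.sin(x) + rng.normal(0.0, 0.3, N).mean() for x in grid])

# A cubic spline through the means gives a smooth curve for plotting.
# Note it interpolates the means exactly, so any residual noise in the
# means is passed through to the curve.
spline = make_interp_spline(grid, means, k=3)
fine_x = np.linspace(grid[0], grid[-1], 200)
smooth_y = spline(fine_x)
```

One design caveat: because this spline passes exactly through each mean, a noisy mean (small $N$) produces visible wiggles; a smoothing spline (e.g. `scipy.interpolate.UnivariateSpline` with a nonzero smoothing factor) trades exact interpolation for robustness to that noise.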