5
$\begingroup$

It is a very general question. Is there a good way to systematically increase the accuracy and precision of a measuring tool using only mathematical means? For example, averaging 10 measurements can produce a better one. Or using two independent tools. I don't know whether measurement theory, statistical quality control, or another subject can help.

$\endgroup$
2
  • $\begingroup$ such a situation would be very context specific, but have a look at kalman filters as an example where a measurement could serve as adding information (i.e. an 'update') to already known information, rather than as the sole information by itself. $\endgroup$ Commented Aug 28, 2019 at 23:03
  • $\begingroup$ @Tasos Papastylianou Yes, the Kalman filter could be helpful, especially if the real value is not really constant, but is possibly drifting, like temperature, atmospheric pressure, speed, etc. $\endgroup$
    – Poutnik
    Commented Aug 30, 2019 at 12:38

4 Answers

9
$\begingroup$

Generally, improving the measurement process has a much greater positive impact on results than sophisticated mathematical processing of low-quality data. But sometimes we are forced to work with the data we have.

There are 3 quantities:

resolution = the smallest difference in the directly measured value that the tool can distinguish,

precision = the level of uncertainty of repeated measurements of the same value, due to random fluctuations. It is affected by the resolution as well, if the resolution is comparable to the precision.

accuracy = the level of agreement between the estimated and the real value, determined by the tool/method bias.

There can be a very precise but inaccurate tool/method and vice versa, even though one usually comes with the other.


Precision can be improved by measurement repetition. The arithmetic mean of $n^2$ measurements has $n$ times better precision than a single measurement.

The estimated standard deviation of the (assumed Gaussian) distribution of individual measurements is

$$s=\sqrt{\frac{\sum ( x_i - \bar x )^2}{n-1}}$$

The estimated standard deviation of the arithmetic mean, used as the estimate of the real value, is:

$$s_{\bar x}=\sqrt{\frac{\sum ( x_i - \bar x )^2}{n(n-1)}}$$
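To illustrate these two estimates, here is a minimal Python sketch; the "true" value and the noise level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.00   # hypothetical real value (made up for this sketch)
sigma = 0.05         # hypothetical single-measurement standard deviation

n = 25                                      # number of repeated measurements
x = rng.normal(true_value, sigma, size=n)   # simulated repeats

s = x.std(ddof=1)          # estimated SD of a single measurement
s_mean = s / np.sqrt(n)    # estimated SD of the arithmetic mean

print(f"mean             = {x.mean():.4f}")
print(f"s (single meas.) = {s:.4f}")
print(f"s of the mean    = {s_mean:.4f}")
```

With $n = 25$ repeats, the mean is about 5 times more precise than a single measurement, in line with the $n^2$ rule above.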

For a method consisting of multiple measurement steps, each with its own error, the error propagation rules have to be followed.

If two quantities subject to random errors are added or subtracted, the squares of their estimated standard deviations are additive:

$$(s_{A\pm B})^2=(s_{A})^2+(s_{B})^2$$

If quantities are multiplied or divided, their relative estimated standard deviations are propagated:

$$\left(\frac{s_{A\cdot B}}{A\cdot B}\right)^2 = \left(\frac{s_{A}}{A}\right)^2 + \left(\frac{s_{B}}{B}\right)^2$$ $$\left(\frac{s_{A/B}}{A/ B}\right)^2 = \left(\frac{s_{A}}{A}\right)^2 + \left(\frac{s_{B}}{B}\right)^2$$

If we e.g. determine the difference of the standard solution volumes $A - B$, where $A=20.00 \pm 0.03$, $B=10.00 \pm 0.04$,

then $A-B=10.00 \pm 0.05$,

as $\sqrt{0.03^2 + 0.04^2}=0.05$.
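A minimal Python sketch of these propagation rules, using the volumes from the example above:

```python
from math import sqrt

# Values from the example: A = 20.00 +/- 0.03, B = 10.00 +/- 0.04
A, s_A = 20.00, 0.03
B, s_B = 10.00, 0.04

# Sum/difference: variances (squared SDs) add
s_diff = sqrt(s_A**2 + s_B**2)
print(f"A - B = {A - B:.2f} +/- {s_diff:.2f}")        # 10.00 +/- 0.05

# Product/quotient: relative SDs add in quadrature
rel = sqrt((s_A / A)**2 + (s_B / B)**2)
print(f"A * B = {A * B:.1f} +/- {A * B * rel:.1f}")   # 200.0 +/- 0.9
```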


Accuracy is harder to improve by mathematical means. It can be improved by measurement bias analysis, by calibration of tools or methods, and by combining multiple tools/methods/labs.

Calibration itself adds statistical error, which decreases the precision of the overall method according to the propagation rules above:

$$(s_\mathrm{rel,method})^2 = (s_\mathrm{rel, meas})^2+(s_\mathrm{rel, cal})^2$$

Using more tools/methods/labs leads to a "mean of the means", which partially cancels the possible bias, similarly to how averaging measurements improves precision.

But it is more costly.
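A rough Monte Carlo sketch of the "mean of the means" idea; the assumption that each lab's bias is an independent random offset, and all the numbers, are mine rather than part of the answer:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0
n_labs, n_repeats = 5, 20

# each lab is assumed to have its own fixed bias plus random noise
lab_bias = rng.normal(0.0, 0.5, size=n_labs)
noise = rng.normal(0.0, 0.2, size=(n_labs, n_repeats))
data = true_value + lab_bias[:, None] + noise

lab_means = data.mean(axis=1)   # averaging repeats improves precision only
grand_mean = lab_means.mean()   # averaging labs also averages out their biases

print("per-lab means:", np.round(lab_means, 2))
print(f"grand mean = {grand_mean:.2f}  (true value {true_value})")
```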

$\endgroup$
2
  • $\begingroup$ Nice answer, +1. "Accuracy is harder to improve": I'm not so sure about this, though. IMHO noise/random uncertainty is easy to tackle in theory only. In practice, interesting/important sources of noise often turn out to be quite costly or even impossible to tackle as well. $\endgroup$
    – cbeleites
    Commented Aug 29, 2019 at 20:37
  • $\begingroup$ Well, better precision is generally easier to achieve than better accuracy by mathematical means in the context of the question. $\endgroup$
    – Poutnik
    Commented Aug 30, 2019 at 6:03
6
$\begingroup$

Your non-trivial question has a yes and a no answer. It is not possible to increase the accuracy of a dataset just by mathematical means. If you knew the systematic error in your data, you could do the arithmetic to correct for it, but there is no a priori way to ascertain whether your measurement tool lacks accuracy until and unless we have a reference for comparison. The simplest example is that of an electronic balance. If we weigh something, there is no way to confirm whether the displayed value is correct or not. One can only check that by weighing a reference weight, i.e., a known mass. The SI recently got rid of these physical reference artefacts. One might argue that mass can now be traced to Planck's constant; yes, but we still need to calibrate/determine the local value of "g" at that location.

Now coming to the second part: can we improve the precision? This is where mathematics can help. The simplest way is to make a lot of measurements and assume that there is no systematic error. Lack of precision means you have a lot of noise in the measurement. A very advanced technique is the Hilbert transform to reduce noise in the measurement process. You can also apply a smoothing function to the data to reduce noise, as in the sketch below.
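A minimal sketch of the smoothing idea; the signal is synthetic, and the Savitzky-Golay filter is just one common choice of smoother (not something prescribed by this answer):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)               # hypothetical noise-free signal
noisy = clean + rng.normal(0, 0.1, t.size)  # added measurement noise

# Savitzky-Golay smoothing: local polynomial fit in a sliding window
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

print("RMS error, raw:     ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMS error, smoothed:", np.sqrt(np.mean((smoothed - clean) ** 2)))
```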

$\endgroup$
2
$\begingroup$

To add to @Poutnik's and @M.Farooq's answers:

Is there a good way to systematically increase the accuracy and precision of a measuring tool using only mathematical means?

No, because what you can and should do in terms of maths/statistics depends very much on your application and data generation/measurement processes.


Using only mathematical means is not going to work because you'll need to know where your error comes from in order to tackle it efficiently.
That is, besides the difference in treatment between systematic and random errors (see Poutnik's answer), there usually isn't only one error but several sources of error/uncertainty. Besides the mathematical/statistical tools to tackle the errors, knowledge about the application background and your measurement instruments is needed to identify possible/likely sources of error.
Once you have them, you can measure their contributions experimentally (this is where maths/statistics first come in) and find out which few are the most important ones. (If you know the situation sufficiently well to judge this without detailed experimentation, that's fine, too.)


Repeated measurements vs. pooling results from different methods: this will also depend on what your main source of error is. If it is measurement noise, then repeated measurements help (usually according to the $n^2$ rule Poutnik cited, for the sources of error that your repetitions cover). You'll need to do your repetitions at the correct "level" to address the actual source of the noise, though:

Assume you have, e.g., a field sampling error with standard deviation 3 and instrument noise with standard deviation 1, and their contributions just add up. The total error (error propagation for additive, independent sources of noise) is $\sqrt{3^2 + 1^2} \approx 3.16$ (standard deviation); a small numeric sketch of this error budget follows the list below.

  • You decide to do 100 runs of the instrument, reducing the instrument error standard deviation by a factor of $\frac{1}{\sqrt{100}} = \frac{1}{10}$. Thus, the total error now is $\sqrt{3^2 + 1^2\cdot\frac{1}{100}} \approx 3.00$, an improvement of about 5 %. I.e., about 95 % of the total error is field sampling error.
  • If instead you add a 2nd field sample, total error is $\sqrt{3^2\cdot\frac{1}{2} + 1^2} \approx 2.34$, i.e. total error is improved by a quarter.
    This level of improvement is already impossible to achieve with improvements in the instrument part.
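A small Python sketch of this error budget, using the assumed standard deviations 3 and 1 from the example:

```python
from math import sqrt

s_field, s_instr = 3.0, 1.0   # assumed field-sampling and instrument SDs

def total_error(n_field_samples, n_instrument_runs):
    # averaging n independent repeats divides that variance contribution by n
    return sqrt(s_field**2 / n_field_samples + s_instr**2 / n_instrument_runs)

print(total_error(1, 1))     # ~3.16  baseline
print(total_error(1, 100))   # ~3.00  100 instrument runs: ~5 % improvement
print(total_error(2, 1))     # ~2.35  a second field sample: ~26 % improvement
```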

In conclusion, pooling results from different methods can be beneficial if your sample (matrix) is complicated, so that you expect different bias (e.g. masking, ...) for the different methods. Usually, one would not just average those results but again use chemical and application knowledge to interpret and weight these results with respect to their reliability and to the sample composition.
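One purely statistical way of weighting by reliability is an inverse-variance weighted mean; the numbers below are invented, and this deliberately ignores the chemical judgement just mentioned:

```python
import numpy as np

# hypothetical results from three methods, with their estimated SDs
values = np.array([10.2, 9.8, 10.5])
sds    = np.array([0.1, 0.3, 0.4])

# weight each result by 1 / variance, i.e. by its purely statistical reliability
weights = 1.0 / sds**2
weighted_mean = np.sum(weights * values) / np.sum(weights)
weighted_sd = np.sqrt(1.0 / np.sum(weights))

print(f"weighted mean = {weighted_mean:.2f} +/- {weighted_sd:.2f}")
```

In practice the weights would also reflect suspected bias and knowledge of the matrix, not just the statistical uncertainties.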


Possibly helpful literature fields (unfortunately, to my knowledge this information is quite spread throughout various fields):

  • A measurement theory/metrology textbook will give you the basic concepts.
  • Quality control literature has e.g. experimental designs to measure the size of various contributors of random uncertainty.
  • In statistics, so called mixed models can deal with situations where systematic and random influencing factors occur.
  • Chemometrics literature also has relevant contributions, particularly with respect to multivariate data.
  • Literature about your instrumental method that discusses the various noise sources for this type of instrumentation.
  • Application literature may tell you about particular application-related sources of error.
  • Sampling has its own set of literature.
$\endgroup$
2
$\begingroup$

Experimental design also comes into the fray. If you have an apparatus that is not quantitatively accurate, you can apply it to your experimental sample and to a control for which you have a reference value of what you are trying to measure. You then calibrate your results to the known reference. I've used this approach at work to compare acid pickling times of differently processed samples of a steel product we're developing.
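A minimal sketch of this kind of single-point calibration against a known reference; the numbers and the assumption of a proportional instrument response are mine, not from the answer:

```python
# Control (reference) with a certified value, measured on the same apparatus;
# all numbers and the proportional-response assumption are illustrative only.
reference_true = 50.0       # assumed certified value of the control
reference_measured = 46.0   # assumed apparatus reading for the control

sample_measured = 61.5      # reading for the experimental sample

# calibrate the sample reading against the known reference
correction = reference_true / reference_measured
sample_corrected = sample_measured * correction

print(f"corrected sample value: {sample_corrected:.1f}")
```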

$\endgroup$
1
  • 1
    $\begingroup$ ... and in doing so, please make sure the noise on your calibration results is really negligible compared to the noise on sample measurements. Otherwise, you turn a random uncertainty (noise, variance) into a new bias! $\endgroup$
    – cbeleites
    Commented Aug 29, 2019 at 20:41
