$\begingroup$

I don't know if this is the right place to ask this question, since both chemistry and programming are involved.

I was working on particle tracking of a series of pictures (.tiff) of colloids, using trackpy in Python.

The code gives me the position of each particle to subpixel accuracy.

There is another step involved: checking the subpixel accuracy. In this method we check the uniformity of the decimal part of the positions.

A quick way to check for subpixel accuracy is to verify that the decimal parts of the x and/or y positions are evenly distributed. Trackpy provides a convenience plotting function for this called subpx_bias.
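To see what this check is measuring, here is a minimal numpy sketch of the same idea, with simulated positions rather than real trackpy output (subpx_bias produces a histogram plot; this sketch just computes the histogram counts directly):

```python
import numpy as np

rng = np.random.default_rng(0)

# True particle positions are physically random, so their subpixel
# (decimal) parts should be uniform on [0, 1).
x = rng.uniform(0, 100, size=10_000)

# The quantity being histogrammed: the fractional part of each position.
frac = x % 1.0
counts, _ = np.histogram(frac, bins=10, range=(0.0, 1.0))
print(counts)  # roughly 1000 per bin: a flat histogram, no bias

# Failure mode: if the mask (feature size) is too small, fitted positions
# collapse to the nearest integer pixel, so the fractional parts pile up
# at 0 (equivalently, peaks near 0 and 1 in the histogram).
frac_rounded = np.round(x) % 1.0
counts_bad, _ = np.histogram(frac_rounded, bins=10, range=(0.0, 1.0))
print(counts_bad)  # all mass in the first bin
```

A flat histogram means the localization algorithm is reporting genuine subpixel information; a spike near 0 and 1 means the decimals are an artifact of rounding to whole pixels.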


The mask size is more or less the diameter of a particle.

I do not understand how an even distribution of the decimal part ensures that we are on the right track, and how a dip shows that something is wrong.

You can also refer to Eric Weeks' website, where it is briefly mentioned that:

One failure mode is if the length scale in feature is made too small, then all the x and y coords get 'rounded off' to the nearest pixel value. The above command plots a histogram of the fractional part of the x-coords of the image. The physically distributed positions should be random -- giving a flat histogram. If the histogram has two peaks (near 0 and 1), set the size parameter in 'feature' a little bigger, determine a new masscut, and repeat until everything is happy.

$\endgroup$
  • $\begingroup$ Suppose you generate uniformly distributed random real numbers between 2 and 3, having 3 decimal places. Then subtracting 2 from each one will give uniformly distributed (roughly flat histogram) decimal fractions between 0 and 1. But if you had simply rounded off the original numbers to integers, you would only get 2 and 3. So that test is just to make sure the “length scale” is not corrupting the data. Make sense? $\endgroup$
    – Ed V
    Commented May 26, 2021 at 2:16
  • $\begingroup$ But then how does increasing the diameter (the size parameter of feature) make the histogram flat? $\endgroup$
    – crabNebula
    Commented May 26, 2021 at 2:24
  • $\begingroup$ If the mask size is too small, you are biasing toward integers: the particles are being sorted, as it were, into integer sizes. This is a quantization error or discretization error. With a larger mask size, you effectively “dither” the size estimation and reduce the quantization error. Kind of like adding a little white noise to dither the pixels and avoid the chunky quantization. $\endgroup$
    – Ed V
    Commented May 26, 2021 at 2:38
  • $\begingroup$ Can you please write an answer? I am just a beginner and not so clear about mask size either. $\endgroup$
    – crabNebula
    Commented May 26, 2021 at 2:44
  • $\begingroup$ This is not really a chemistry question: it is an instrument-behavior or statistics question. I have not done any particle tracking experiments. Maybe this question should be migrated (not cross-posted) to the Signal Processing Stack Exchange or the CV Stack Exchange. $\endgroup$
    – Ed V
    Commented May 26, 2021 at 2:57
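Ed V's example in the comments above (uniform reals between 2 and 3 versus the same numbers rounded to integers) can be sketched numerically; this is an illustrative simulation, not trackpy code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniformly distributed random reals between 2 and 3, kept to 3 decimal places.
vals = np.round(rng.uniform(2, 3, size=5000), 3)

# Subtracting 2 leaves the decimal fractions, which are roughly
# uniform on [0, 1]: a flat histogram.
frac = vals - 2.0
counts, _ = np.histogram(frac, bins=10, range=(0.0, 1.001))
print(counts)

# Rounding the originals to integers first destroys that information:
# every value becomes exactly 2 or 3.
rounded = np.round(vals)
print(np.unique(rounded))  # -> [2. 3.]
```

This is the point of the test: a localization routine with a too-small mask behaves like the rounding step, so the decimal parts it reports carry no real information.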

