I don't know if this is the right place to ask this question, since both chemistry and programming are involved.
I was working on particle tracking in a series of images (.tiff) of colloids, using trackpy in Python. The code gives me the positions of the particles to subpixel accuracy.
There is a further step: checking the subpixel accuracy. A quick way to do this is to check that the decimal parts of the x and/or y positions are evenly distributed. Trackpy provides a convenience plotting function for this, called subpx_bias.
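As I understand it, what subpx_bias plots is essentially a histogram of the fractional part of the fitted coordinates. A minimal numpy-only sketch of the same check (the positions here are synthetic, standing in for the x column that trackpy's locate returns):

```python
import numpy as np

def subpixel_fractions(coords):
    """Return the fractional (decimal) parts of fitted coordinates."""
    return np.mod(coords, 1.0)

# Synthetic "fitted" x positions, standing in for real locate() output.
rng = np.random.default_rng(0)
x = rng.uniform(0, 512, size=5000)  # well-localized: fractions are uniform

counts, edges = np.histogram(subpixel_fractions(x), bins=10, range=(0, 1))
# For unbiased localization, each of the 10 bins should hold roughly 500 counts.
print(counts)
```

If the detection is unbiased, each bin of this histogram is roughly equally populated.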
The mask size is more or less the diameter of a particle.
What I don't understand is: how does an even distribution of the decimal part ensure that we are on the right track, and how does a dip (or peak) in the histogram show that something has gone wrong?
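One way to see the logic: real particle positions have no reason to prefer any particular offset within a pixel, so the decimal parts of correctly fitted coordinates should be uniform. But if the fit is effectively just snapping to the nearest pixel (e.g. the mask is too small), every fractional part collapses toward 0 or 1, producing edge peaks and a central dip. A synthetic numpy sketch of this failure mode (not trackpy itself; the 0.05-pixel residual noise is an assumed illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
true_x = rng.uniform(0, 512, size=5000)  # true positions: fractions uniform

# Failure mode: the fit degenerates to nearest-pixel rounding, plus only a
# tiny residual, so the fractional parts pile up near 0 and 1.
biased_x = np.round(true_x) + rng.normal(0, 0.05, size=true_x.size)

frac_good = np.mod(true_x, 1.0)
frac_bad = np.mod(biased_x, 1.0)

good_counts, _ = np.histogram(frac_good, bins=10, range=(0, 1))
bad_counts, _ = np.histogram(frac_bad, bins=10, range=(0, 1))

# The biased fit shows peaks in the edge bins and a dip in the middle;
# the unbiased fit is flat across all bins.
print(good_counts)
print(bad_counts)
```

So a flat histogram is evidence the fit is genuinely interpolating within pixels, while a dip in the middle means the subpixel information is being lost to rounding.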
You can also refer to Eric Weeks' website, where it is briefly mentioned that:
One failure mode is if the length scale in feature is made too small, then all the x and y coords get 'rounded off' to the nearest pixel value. The above command plots a histogram of the fractional part of the x-coords of the image. The physically distributed positions should be random -- giving a flat histogram. If the histogram has two peaks (near 0 and 1), set the size parameter in 'feature' a little bigger, determine a new masscut, and repeat until everything is happy.