I understand that a common convention is to report the uncertainty of an analogue instrument as plus or minus half of the smallest scale division.
By this logic, a measuring cylinder with $\pu{0.5 mL}$ graduations has an uncertainty of $\pu{\pm 0.25 mL}$.
However, because the measurement should also be reported to the same number of decimal places as the uncertainty, this convention suggests that any measurement with this measuring cylinder should be reported to two decimal places (e.g., $\pu{(17.85 \pm 0.25) mL}$).
Would this not be an example of false precision? Reporting a volume to two decimal places seems excessive for a measuring cylinder. Would it not make more sense to report an uncertainty of $\pu{0.3 mL}$ (0.25 rounded up to one decimal place) and a measurement with only one decimal place (e.g., $\pu{(17.9 \pm 0.3) mL}$)?
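For what it's worth, the convention I'm describing can be sketched in a few lines of Python (the `report` helper is hypothetical, just to illustrate the rounding I have in mind: round the uncertainty up to one significant figure, then round the value to the same decimal place):

```python
from decimal import Decimal, ROUND_CEILING, ROUND_HALF_UP

def report(value, uncertainty, unit="mL"):
    """Hypothetical helper: round the uncertainty UP to one significant
    figure, then quantize the value to the same decimal place."""
    u = Decimal(str(uncertainty))
    # Exponent of the leading significant digit, e.g. 0.25 -> -1
    quantum = Decimal(1).scaleb(u.adjusted())
    u_rounded = u.quantize(quantum, rounding=ROUND_CEILING)
    v_rounded = Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)
    return f"({v_rounded} ± {u_rounded}) {unit}"

print(report(17.85, 0.25))  # → (17.9 ± 0.3) mL
```

This reproduces exactly the one-decimal-place result I'm suggesting, so my question is really whether that rounding step is the conventional one.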
Before you answer, I understand that the rules for uncertainty are not set in stone and depend on the experimenter's judgement to some extent. However, I'd love to know which approach would be more conventional in this situation. Thanks so much!