37
$\begingroup$

Reading the book Schaum's Outline of Engineering Mechanics: Statics I came across something that makes no sense to me considering the subject of significant figures:

[fragment from Schaum's Outline of Engineering Mechanics: Statics]

I have searched and found that practically the same thing is said in another book (Fluid Mechanics DeMYSTiFied):

[fragment from Fluid Mechanics DeMYSTiFied]


So, my question is: why, if the leading digit in an answer is a 1, does it not count as a significant figure?

$\endgroup$
8
  • 40
    $\begingroup$ Note that those 2 passages are clearly plagiarised from one another, so in reality you only have 1 source. $\endgroup$
    – Brondahl
    Commented Dec 23, 2019 at 10:07
  • 7
    $\begingroup$ Note that wikipedia has never heard of this concept. Not a definitive source, obviously, but worth considering in the context of "what is the average STEM professional likely to think". $\endgroup$
    – Brondahl
    Commented Dec 23, 2019 at 10:08
  • 19
    $\begingroup$ @ReinstateMonica--Brondahl-- Those books share an author. So it's the same guy making that claim in two different books. $\endgroup$
    – UTF-8
    Commented Dec 23, 2019 at 14:09
  • 8
    $\begingroup$ A better candidate for what engineering professionals are likely to think is the definition of number of significant figures given by IS 2:1960 and BS 1957:1953. $\endgroup$
    – JdeBP
    Commented Dec 23, 2019 at 17:10
  • 1
$\begingroup$ A digit is something other than a figure... which does make a difference. $\endgroup$ Commented Dec 24, 2019 at 12:26

5 Answers

78
$\begingroup$

Significant figures are a shorthand to express how precisely you know a number. For example, if a number has two significant figures, then you know its value to roughly $1\%$.

I say roughly, because it depends on the number. For example, if you report $$L = 89 \, \text{cm}$$ then this implies roughly that you know it's between $88.5$ and $89.5$ cm. That is, you know its value to one part in $89$, which is roughly to $1\%$.

However, this gets less accurate the smaller the leading digit is. For example, for $$L = 34 \, \text{cm}$$ you only know it to one part in $34$, which is about $3\%$. And in the extreme case $$L = 11 \, \text{cm}$$ you only know it to one part in $11$, which is about $10\%$! So if the leading digit is a $1$, the relative uncertainty of your quantity is actually a lot higher than naively counting the significant figures would suggest. In fact, it's about the same as you would expect if you had one fewer significant figure. For that reason, $11$ has "one" significant figure.
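
To see those numbers concretely, here is a minimal Python sketch (my illustration, not part of the original answer) that computes the implied relative uncertainty for the three examples above:

```python
# Implied relative uncertainty of a two-significant-figure length:
# the value is known to one part in N, where N is the number itself.
for value in [89, 34, 11]:
    relative = 1.0 / value  # full width of the implied interval (±0.5) over the value
    print(f"L = {value} cm -> one part in {value}, about {relative:.0%}")

# Output:
# L = 89 cm -> one part in 89, about 1%
# L = 34 cm -> one part in 34, about 3%
# L = 11 cm -> one part in 11, about 9%
```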

Yes, this rule is arbitrary, and it doesn't fully solve the problem. (Now instead of having a sharp cutoff between $L = 9$ cm and $L = 10$ cm, you have a sharp cutoff between $L = 19$ cm and $L = 20$ cm.) But significant figures are a bookkeeping tool, not something that really "exists". They're defined just so that they're useful for quick estimates. In physics, at least, when we start quibbling over this level of detail, we just abandon significant figures entirely and do proper error analysis from the start.

$\endgroup$
7
  • 4
    $\begingroup$ It's worth noting that the sharp cutoff moves, but the degree of the cutoff is half as much by excluding 1 as a leading significant digit. $\endgroup$
    – Kevin
    Commented Dec 23, 2019 at 20:05
  • 6
    $\begingroup$ @Kevin No, it really is the same. If you count $1$ as a significant digit, then e.g. two sig figs means "anywhere between 1% and 10% uncertainty". If you don't count $1$ as a significant digit, then two sig figs means "anywhere between 0.5% and 5% uncertainty". It's still an order of magnitude range, it's just now better centered about 1%. $\endgroup$
    – knzhou
    Commented Dec 23, 2019 at 20:22
  • 4
$\begingroup$ Ah - I see where you're coming from. I was viewing it as "the worst level of inaccuracy is X%" - and that algorithm approach decreases X% from 10% to 5%. You're viewing it as "the order-of-magnitude difference is always a factor of 10, regardless of where the split is." $\endgroup$
    – Kevin
    Commented Dec 23, 2019 at 20:31
  • 2
    $\begingroup$ @Kevin By that logic, a far better level of inaccuracy is obtained by subtracting four thousand from the number of digits, and calling that the number of significant figures. It's not a useful metric. $\endgroup$
    – wizzwizz4
    Commented Dec 24, 2019 at 14:54
$\begingroup$ This seems like a good answer to me. I might expand it a bit by saying something about error analysis and exactly how it sidesteps this problem. I would at least indicate that in physics, one usually quotes quantities as, e.g., $80 \pm 30\ \text{cm}$ rather than relying on significant figures to tell the story. $\endgroup$ Commented Dec 25, 2019 at 3:11
15
$\begingroup$

This isn't an actual rule. And as some people point out in the comments, it's not even mentioned in the Wikipedia article on significant digits. The standard rule of this kind applies to leading $0$s, not to $1$s.

Simple counter-example: $10$. Would the authors claim that this number has no significant digits?

You can verify this by doing a search for "sig fig counter." All of them should tell you that the number in your question has 4 significant figures.

As others note, this boundary condition is clearly arbitrary. But it needs to be consistent across the literature, or else confusion abounds when you're working with others. So I'd say: ignore the rule.

$\endgroup$
8
  • 7
    $\begingroup$ Not only is this rule "a thing" but it was nearly universal during the slide rule era, though it has fallen out of fashion in the intervening decades. $\endgroup$ Commented Dec 23, 2019 at 19:11
  • 2
    $\begingroup$ Regarding the counter-example: Would you say zero has no significant digits? The number of such digits is only a mechanism to gauge how certain you are about that value. For example: if the actual value is 1.00001, but I can only measure hundredths and therefore see 1.00, I could say it's one with three significant digits. (Or according to those authors, two sig. digits). Actual error analysis will always be more robust, though. $\endgroup$
    – Phlarx
    Commented Dec 23, 2019 at 21:11
  • 3
    $\begingroup$ Where does your "traditional definition" come from? I do make the difference between 20 and 20.0, for instance. $\endgroup$
    – Blackhole
    Commented Dec 23, 2019 at 22:33
  • 1
$\begingroup$ There is no universally agreed set of rules for significant figures. The problem is that the whole notion is a blunt instrument (though too important to simply do away with altogether) and all sets of rules have bad corner cases. Various industries do have standards documents, however, so if you work in those fields you can point to an authoritative source and say "This is how we do it". It's just that you won't find universal agreement. I find that chemists are much more unified in their approach than physicists. $\endgroup$ Commented Dec 24, 2019 at 0:47
  • 2
    $\begingroup$ BTW, the answer to your question about "10" is that on a slide rule that would be $1.00 \times 10^1$, so it "obviously" has two sig-figs, and for actual integer values the whole notion is misplaced. Context matters. $\endgroup$ Commented Dec 24, 2019 at 0:49
12
$\begingroup$

Truncating numbers to a certain precision is completely arbitrary. There's no reason not to make it more arbitrary.

It seems like someone didn't like the step in precision between 9.99 and 10.0, so they moved it to between 19.99 and 20.0.

In any field where results are clustered around a power of 10, doing this may be beneficial.
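
To put numbers on that step (my worked example, not part of the original answer): with three significant figures displayed, the implied half-unit uncertainty in the last place gives a relative precision that jumps tenfold across the boundary,
$$\frac{0.005}{9.99} \approx 0.05\% \quad\longrightarrow\quad \frac{0.05}{10.0} = 0.5\%,$$
and moving the cutoff to 20 only relocates the jump:
$$\frac{0.005}{19.99} \approx 0.025\% \quad\longrightarrow\quad \frac{0.05}{20.0} = 0.25\%.$$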

$\endgroup$
3
  • 1
$\begingroup$ Uh, no, it's not just "we simply moved it." The level of imprecision between 9.99 and 10.0 is twice what it is between 19.99 and 20.0. This rule tightens the allowed level of imprecision for a set amount of significant digits. $\endgroup$
    – Kevin
    Commented Dec 23, 2019 at 20:03
  • 2
$\begingroup$ but only because the numbers are twice as big; the step is still approximately a factor of 10. $\endgroup$
    – Jasen
    Commented Dec 23, 2019 at 21:46
  • $\begingroup$ ... however arbitrary you might think it, it's partly just common sense. Is "100.0" really ten times more precise than "99.0" if measured with the same instrument, say? $\endgroup$ Commented Dec 24, 2019 at 18:17
3
$\begingroup$

It's Experiment Time!

(I was starting to see both points of view on whether to drop the 1, and was curious if there was some objective way of tackling the problem... so I figured it might be a good opportunity for an experiment. For Science!)

Assumptions: Significant digits are a way of signifying precision on a number - either from uncertainty of measurement or as the result of calculations on a measurement. If you multiply two measurements together, the result has as many significant digits as the input with fewer of them (so 3.8714 x 2.14 has three significant digits, not the seven you'd get from plugging it into a calculator).

That 'calculation' part is what I'd like to take advantage of. Arguing over the significant digits of a number in a vacuum is just semantics; seeing how the precision carries forward through actual operations gives a testable prediction. (In other words, this should remove any sort of 'cutoff' issue. If two numbers have X significant digits, then their product should be accurate to roughly X significant digits - and the validity of how you determine what's a significant digit should translate accordingly.)
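
As a concrete illustration of that multiplication rule, here is a minimal Python sketch (the helper name is mine, not from the answer):

```python
import math

def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures."""
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

product = 3.8714 * 2.14        # a calculator shows 8.284796
print(round_sig(product, 3))   # -> 8.28: keep three significant digits,
                               #    matching the less precise input (2.14)
```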

Experimental Layout

Generate two high-precision, Benford-compliant coefficients. (I'm not actually sure Benford's Law matters in this experiment, but I figured I shouldn't omit any possible complicating factors - and if we're talking physics, our measurements should fit Benford's Law.) Perform an operation like multiplication on them. Then round those same coefficients down to 4 digits after the decimal, and perform the same multiplication on the rounded values. Finally, check how many digits the two resulting values have in common.

In other words, check how well the imprecise 'measurement' version agrees with the actual, hidden, high-precision calculation.

Now, in an ideal world, the result would be 5 matching (significant) digits. However, since we're just blindly checking whether digits match, we're going to have some that match by sheer luck.
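
The answer doesn't include its code, so here is a minimal sketch of how such an experiment might be run, assuming Benford-distributed mantissas, 5-significant-figure "measurements" of the inputs, and a straight leading-digit comparison between the exact and rounded products (all of these details are my guesses at the procedure):

```python
import math
import random
from collections import Counter

def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures."""
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

def benford_mantissa() -> float:
    """A value in [1, 10) whose leading digits follow Benford's Law."""
    return 10 ** random.uniform(0.0, 1.0)

def digits_of(x: float, n: int = 8) -> str:
    """The first n decimal digits of x, ignoring magnitude and the point."""
    mantissa = x / 10 ** math.floor(math.log10(abs(x)))
    return f"{mantissa:.{n}f}".replace(".", "")[:n]

totals, matches = Counter(), Counter()
for _ in range(200_000):
    a, b = benford_mantissa(), benford_mantissa()
    exact = a * b
    rough = round_sig(a, 5) * round_sig(b, 5)    # inputs 'measured' to 5 figures
    ones_in = sum(1 for x in (a, b) if digits_of(x)[0] == "1")
    ones_out = digits_of(exact)[0] == "1"
    for pos in (5, 6):                           # does the 5th/6th digit survive?
        key = (ones_out, ones_in, pos)
        totals[key] += 1
        matches[key] += digits_of(exact)[:pos] == digits_of(rough)[:pos]

for (ones_out, ones_in, pos) in sorted(totals):
    pct = 100 * matches[(ones_out, ones_in, pos)] / totals[(ones_out, ones_in, pos)]
    print(f"result starts with 1: {ones_out!s:5}  "
          f"inputs starting with 1: {ones_in}  digit {pos} matches {pct:.1f}%")
```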

Experimental Results For Multiplication

Digits Matching Where Result Doesn't Start With One
    ... and no input value starts with One:
            5th digit matches 89.7%
            6th matches 21.4%
    ... and one input value starts with One:
            5th digit matches 53.7%
            6th matches 5.57%
    ... and two input values start with One:
            5th digit matches 85.2%
            6th matches 11.1%
Digits Matching Where Result Starts With One:
    ... and no input value starts with One:
            5th digit matches 99.9+%
            6th matches 37.8%
    ... and one input value starts with One:
            5th digit matches 99.9+%
            6th matches 25.5%
    ... and two input values start with One:
            5th digit matches 95.0%
            6th matches 13.9%

Conclusions For Multiplication

First, when multiplying two numbers gives a result that starts with 1, you should probably count the 1 as a significant digit. In other words, if you multiply '4.245' x '3.743' and come up with '15.889035', you should probably leave it at '15.89'. If you add an additional digit and call it '15.889', you have a 38% chance of that final digit being correct... which probably isn't high enough to justify including it.

But when one of the inputs starts with 1, things get strange. Multiply '1.2513' x '5.8353', and realistically you don't have five significant digits in your result. According to the experiment, you've got four digits... and a 54% chance of being right on that fifth value. Well, if a 38% chance in the prior situation (multiplying two numbers and ending with a value starting with '1') of getting an 'extra' significant digit isn't acceptable, then it's fair to say the 54% chance here is also too low to justify including the 5th digit.

So you might be tempted to say "Don't treat a leading 1 as significant as an input to a calculation"... except that multiplying 1.#### x 1.#### (two numbers that start with 1) gives you 85.2% accuracy on that fifth digit - pretty much the same level of accuracy as when none of the three numbers begins with a 1. So if 8.83 x 8.85 should have three significant digits, so should 1.83 x 1.85.

Final Conclusion: It's a deceptively difficult problem to find a good heuristic, especially since there's a big difference between a measurement of 1.045 that's fed into a calculation and the 1.045 that comes out as a result of one. Which explains why there are multiple methods of handling leading 1s. (If I were forced to choose a heuristic, it would be: don't count the leading '1' on any measurements performed, but count it in the output of any calculations.)

$\endgroup$
2
$\begingroup$

Keeping track of "significant digits" is a heuristic for indicating approximately the precision of a number. It's not a substitute for a real uncertainty analysis, but it's good enough for many people and many purposes. When some people run up against the limitations of significant figures, they have enough background (or colleagues with enough background) to switch to a more serious error analysis. When other people run up against those same limitations, they try to "fix" the significant-digits approach by creating new ad-hoc rules like this one.

Let's suppose that you and I are independently analyzing the same data set. Each of us has measured the same quantity to two significant figures: your result is 0.48, and my result is 0.52. Since a healthy significant-figure analysis retains one least-significant digit whose value is only mostly trustworthy, it's not clear whether our measurements agree or not; that level of disagreement is interesting and we might end up discussing how to turn that into a three-significant-figure experiment, in case we've both correctly measured a "true" value closer to 0.498.

Now imagine a different universe where we both do the same experiment, but a different definition somewhere means that our results differ numerically by an exact factor of twenty. Your measurement in this universe is 9.6, and mine is 10.4. There's still an interesting tension between those numbers. But if I count the leading 1 as one of my two significant digits, I should report my result as "10", suggesting it is equally likely to be "9" or "11". If you report 9.6 and I report 10, the tension between our results is much less obvious. It also appears that my result is ten times less precise than yours. I shouldn't be able to change the precision of a number by doubling or halving it.
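
A quick sketch of that last point (my illustration; the numbers are the ones from the paragraph above): rounding both universes' results to two significant figures, counting the leading 1, hides the tension in the second universe:

```python
import math

def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures."""
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

for yours, mine in [(0.48, 0.52), (9.6, 10.4)]:
    print(round_sig(yours, 2), round_sig(mine, 2))
# -> 0.48 0.52   the ~8% disagreement is plainly visible
# -> 9.6 10.0    10.4 becomes "10"; the same disagreement is obscured
```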

That's the logic for keeping track of a "guard digit" if a number happens to fall in the bottom part of a logarithmic decade. (The Particle Data Group keeps a "guard digit" if the first two significant digits are between 10 and 35.) But to explain this by saying that "a leading 1 isn't a significant digit," as your source does: that's terribly confusing. I'd find a book written by someone else and read the author you quote here with some caution.

@supercat reminds me in a comment that there is a compact convention for representing real uncertainties that has become popular in the literature over the past couple of decades: one writes the uncertainty in the last few digits in parentheses just after the number. For example, one might write $12.34(56)$ as a shorthand for $12.34 \pm 0.56$. This approach is nice in the precision-measurements business, where there are many significant figures. For example, the current Particle Data Group reference reports the electron mass (in energy units) as $0.510\ 998\ 950\ 00(15)\,\mathrm{MeV}/c^2$, which is much easier to write and to parse than $0.510\ 998\ 950\ 00\,\mathrm{MeV}/c^2 \pm 0.000\ 000\ 000\ 15\,\mathrm{MeV}/c^2$.
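
For illustration, here's a small Python formatter for that parenthesis notation (the function and its argument convention are my own sketch, not a standard library feature):

```python
def paren_notation(value: float, uncertainty: float, decimals: int) -> str:
    """Format value ± uncertainty as e.g. 12.34(56), with the uncertainty
    expressed in units of the last displayed decimal place."""
    last_place_units = round(uncertainty * 10 ** decimals)
    return f"{value:.{decimals}f}({last_place_units})"

print(paren_notation(12.34, 0.56, 2))              # -> 12.34(56)
print(paren_notation(0.51099895000, 1.5e-10, 11))  # -> 0.51099895000(15)
```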

I haven't seen that approach much in material for introductory students, and I can think of a couple of reasons why. The "significant figure rules" are, for most people, the first time they learn that arithmetic is something you can do with numbers that are not exact. Many students are intellectually unprepared for that idea: they're ready to write 0.5 instead of 1/2, but they're vague on whether to decimalize 1/7 as 0.1 or as 0.1428571429, because the latter is how it comes out of the calculator. Furthermore, to use the parenthesis notation, you have to have some understanding of significant figures already. To combine my examples above, most people who aren't in the precision-measurements business (where understanding the uncertainty may be more challenging than understanding the central value) would write 12.3(6) rather than keeping the guard digits in 12.34(56). But if you were to multiply that value by twenty, it would become 246.8(11.2). Whether to record it thus, or as 247(11), or as $250 \pm 10$, winds up raising the same issues about guard digits that started this question. The ambiguity has moved from the central value to the uncertainty, so the stakes for misjudging are lower, but explaining this to a person who is new to the idea of careful imprecision is a tall order.

$\endgroup$
5
$\begingroup$ It's too bad no convention emerged to distinguish between values having differing levels of uncertainty in the last place, perhaps replacing the last digit with 0/2 or 1/2, or with 0/4, 1/4, 2/4, or 3/4, so that the biggest change in expressed uncertainty between adjacent levels of precision would be a factor of 2.5 rather than a factor of ten. $\endgroup$
    – supercat
    Commented Dec 25, 2019 at 16:40
  • $\begingroup$ @supercat There is such a convention. I've updated the answer. $\endgroup$
    – rob
    Commented Dec 25, 2019 at 19:59
$\begingroup$ I'd not seen that convention. I do remember a rather ancient (probably 1970s) periodic table which marked some of the atomic masses with an asterisk indicating that they were +/- 4 in the last place, while other values were within +/- 1 in the last place. Is there any convention for distinguishing between values that are within 0.501ulp, 0.75ulp, or 1ulp? Also, another thing I've thought should be standardized is a means of indicating values that should be considered exact to arbitrary precision. If one has eight shelves of eight rows of eight columns of blocks, one doesn't have... $\endgroup$
    – supercat
    Commented Dec 25, 2019 at 20:10
  • $\begingroup$ "500" blocks (one significant figure), but 512 exactly. $\endgroup$
    – supercat
    Commented Dec 25, 2019 at 20:11
  • $\begingroup$ When sub-ULP precision matters, then you're doing a real uncertainty analysis rather than using significant digits as a shorthand. The most common way to indicate this is to add one or more guard digits when recording the uncertainty. Note that modern analysis is often done end-to-end using double-precision floating-point numbers on computers, which have about fifteen significant figures; most of that precision could be considered guard digits. For exact values, the reliable way to communicate them is an explanatory sentence. @supercat $\endgroup$
    – rob
    Commented Dec 25, 2019 at 21:21
