
  •
    Great answer. This is really the clearest and most sensible explanation.
    – not2qubit
    Commented Dec 29, 2017 at 14:43
  •
    Actually, back when IP was designed and some of the early groundwork for it was laid down, working with data in integer multiples of 8 bits wasn't a given. Many architectures of that era had register and word sizes in multiples of 12 or 18 bits, for example. This is one reason octal was so popular at the time: each octal digit encodes exactly 3 bits, so 18 bits can be represented as exactly 6 octal digits with no loss and no waste, and 12 bits as 4 octal digits. Microcomputers typically worked in 8-bit quantities, but it was only much later that microcomputers started being regularly connected (especially directly) to the Internet.
    – user
    Commented Dec 30, 2017 at 18:27
  •
    Very nice answer, @Dario Fumagalli. I also like your comments on SE Politics!
    Commented Feb 7, 2022 at 6:14
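
The octal arithmetic mentioned in the comments above can be checked with a short sketch. This is purely illustrative (the `octal_digits` helper is hypothetical, not from the thread); it just confirms that word sizes divisible by 3 map onto octal digits with no waste:

```python
# Each octal digit encodes exactly 3 bits, so a w-bit word needs
# ceil(w / 3) octal digits; the fit is exact only when 3 divides w.
def octal_digits(word_bits: int) -> int:
    """Octal digits needed to represent any value of the given bit width."""
    return -(-word_bits // 3)  # ceiling division

for bits in (12, 18, 8, 16):
    max_value = (1 << bits) - 1          # largest value a word of this size holds
    digits = octal_digits(bits)
    fit = "exact" if bits % 3 == 0 else "wasteful"
    print(f"{bits:2d}-bit word: {digits} octal digits, max {max_value:#o} ({fit})")
```

Running this shows the historical word sizes (12 and 18 bits) fitting octal exactly, while the 8- and 16-bit sizes that later dominated do not, which is consistent with octal's decline in favor of hexadecimal.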