
This question got me wondering about the differences between these three ways of measuring size: a kibibyte, a kilobit, and the conventional kilobyte.

I understand that these measurements have different uses (data transfer rate is measured in bits/sec), but I'm not quite sure if I can tell the difference between Mb and MB and MiB.

Here is a comment, reproduced below, taken from this answer (emphasis mine).

The C64 has 65536 bytes of RAM. By convention, memory size is specified in kibiBytes, data transfer rates in kilobits, and mass storage in whatever-the-manufacturers-think-of-now-Bytes. Harddrives use T, G, M and k on the label, Windows reports the size in Ti, Gi, Mi and ki. And those 1.44MB floppys? Those are neither 1.44MB nor 1.44MiB, they are 1.44 kilokibibytes. That's 1440kiB or 1'474'560 bytes. – Third

3
  • 13
    There will be confusion for years to come. In the early days of computing, people spotted that it was clearly much easier to work with factors of 1024 rather than 1000 for computers. Therefore, for decades, the standard SI prefix "kilo" was (and still very often is) used for the non-standard 1024, and it became a de-facto standard in computing. Except that some people still used the SI 1000 anyway. To sort out the mess, "kibi" is now officially defined as a 1024 factor - but it came far too late for an easy transition. "kilo" will be regularly used/abused for 1024 factors for a while yet.
    – user31438
    Commented Nov 15, 2011 at 6:46
  • 1
    It would have helped adoption if they hadn't chosen prefixes that flat out sound stupid; even an acronym requires someone to "mentally aspirate" the word. I simply will never use "kibibyte", etc. Commented Jan 11, 2017 at 14:43
  • Read this: stackoverflow.com/a/69679309/217867
    – LonnieBest
    Commented Oct 22, 2021 at 18:51

3 Answers

67
1 KiB (Kibibyte) = 1,024 B (Bytes) (2^10 Bytes)
1 kb  (Kilobit)  =   125 B (Bytes) (10^3 Bits ÷ (8 bits / byte) = 125 B)
1 kB  (Kilobyte) = 1,000 B (Bytes) (10^3 Bytes)

It's the same way with any SI prefix: k (1x10^3), M (1x10^6), G (1x10^9), so, by extension:

1 MiB (Mebibyte) = 1,048,576 B (Bytes) (2^20 Bytes)
1 Mb  (Megabit)  =   125,000 B (Bytes) (10^6 Bits ÷ (8 bits / byte) = 125,000 B)
1 MB  (Megabyte) = 1,000,000 B (Bytes) (10^6 Bytes)
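
These relationships are easy to check with a few lines of plain Python (a quick sketch; the constant names are mine, not from any library):

```python
# SI (decimal) multipliers vs IEC (binary) multipliers
KILO, KIBI = 10 ** 3, 2 ** 10
MEGA, MEBI = 10 ** 6, 2 ** 20

print(KIBI)        # 1 KiB -> 1024 bytes
print(KILO // 8)   # 1 kb  -> 125 bytes
print(KILO)        # 1 kB  -> 1000 bytes
print(MEBI)        # 1 MiB -> 1048576 bytes
print(MEGA // 8)   # 1 Mb  -> 125000 bytes
```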

The only ones that are a bit different are the IEC binary prefixes (kibi/mebi/gibi, etc.), because they are in base 2, not base 10 (i.e. all the values are 2^something instead of 10^something). I prefer to just use the SI prefixes because I find them a lot easier. Plus, Canada (my country) uses the metric system, so I'm used to, for instance, 1 kg = 1000 g (or 1 k-anything = 1000 base things). None of these are wrong or right; just make sure you know which one you're using and what it really equates to.

To appease the commenters:

1 Byte (B) = 2 nibbles = 8 bits (b)

This is why, if you've ever taken a look in a hex editor, everything is split into two hexadecimal characters; each hex character is the size of a nibble, and there are two to a byte. For instance:

198 (decimal) = C6 (hex) = 11000110 (bits)
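
That byte/nibble split is easy to see in Python (a small illustration; the variable names are mine):

```python
n = 198
print(format(n, '02X'))  # 'C6'       - two hex digits, one per nibble
print(format(n, '08b'))  # '11000110' - the same 8 bits
# Shift/mask out the two nibbles explicitly:
high_nibble, low_nibble = n >> 4, n & 0xF
print(high_nibble, low_nibble)  # 12 6  (0xC and 0x6)
```
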
15
  • 5
    +1 Mentioning that there are 8 bits in a byte may be useful.
    – paradroid
    Commented May 23, 2011 at 15:52
  • 4
    ...and don't forget that a nybble is four bits (or half a byte)!
    – Linker3000
    Commented May 23, 2011 at 15:57
  • 4
    Might also be aware that the lowercase "b" is sometimes used incorrectly to abbreviate "bytes". I see a lot of places just use "bit" in the abbreviation, such as MB for megabyte and Mbit for megabit, and stay away from "b" altogether.
    – James
    Commented May 23, 2011 at 16:02
  • 4
    The prefix kilo is abbreviated k, not K.
    – garyjohn
    Commented May 23, 2011 at 16:19
  • 1
    @Redandwhite Nope, they use base 10 to measure their storage, but our computers use base 2. This accounts for the discrepancy between what's printed on the box and what shows up in the computer. For example, 500GB (box) = 465.7GiB (computer) (and that is how they get you).
    – squircle
    Commented May 23, 2011 at 16:36
11

There are a few basic terms that are simple and easy to understand:

* A bit      (b)   is the smallest unit of data comprised of just {0,1}
* 1 nibble   (-)   = 4 bits (cutesy term with limited usage; mostly bitfields)
* 1 byte     (B)   = 8 bits (you could also say 2 nibbles, but that’s rare)

To convert between bits and bytes (with any prefix), just multiply or divide by eight; nice and simple.
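
In code, that conversion is a one-liner each way (the helper names below are mine, not from any standard library):

```python
def bits_to_bytes(bits):
    # 8 bits per byte, regardless of prefix
    return bits / 8

def bytes_to_bits(nbytes):
    return nbytes * 8

print(bytes_to_bits(1))           # 8
print(bits_to_bytes(12_000_000))  # 1500000.0 bytes in 12 megabits
```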

Now, things get a little more complicated, because there are two systems for measuring large groups of data: decimal and binary. For years, computer programmers and engineers just used the same terms for both, but the confusion eventually prompted attempts to standardize a proper set of prefixes.

Each system uses a similar set of prefixes that can be applied to either bits or bytes. The prefixes start the same in both systems, but the binary ones sound like baby-talk after that.

The decimal system is base-10 which most people are used to and comfortable using because we have 10 fingers. The binary system is base-2 which most computers are used to and comfortable using because they have two voltage states.

The decimal system is obvious and easy to use for most people (it’s simple enough to multiply in our heads). Each prefix goes up by 1,000 (the reason for that is a whole different matter).

The binary system is much harder for most non-computer people to use, and even programmers often can’t multiply arbitrarily large numbers in their heads. Nevertheless, it’s a simple matter of multiples of two. Each prefix goes up by 1,024. One “K” is 1,024 because that is the closest power of two to the decimal “k” of 1,000 (the two are close at this scale, but the difference rapidly increases with each successive prefix).

The numbers are the same for bits and bytes that have the same prefix.

* Decimal:
* 1 kilobyte (kB)  = 1,000 B  = 1,000^1 B =         1,000 B
* 1 megabyte (MB)  = 1,000 kB = 1,000^2 B =     1,000,000 B
* 1 gigabyte (GB)  = 1,000 MB = 1,000^3 B = 1,000,000,000 B

* 1 kilobit  (kb)  = 1,000 b  = 1,000^1 b =         1,000 b
* 1 megabit  (Mb)  = 1,000 kb = 1,000^2 b =     1,000,000 b
* 1 gigabit  (Gb)  = 1,000 Mb = 1,000^3 b = 1,000,000,000 b

* …and so on, just like with normal metric units (meters, liters, etc.)
* each successive prefix is the previous one multiplied by 1,000


* Binary:
* 1 kibibyte (KiB) = 1,024 B   = 1,024^1 B =         1,024 B
* 1 mebibyte (MiB) = 1,024 KiB = 1,024^2 B =     1,048,576 B
* 1 gibibyte (GiB) = 1,024 MiB = 1,024^3 B = 1,073,741,824 B

* 1 kibibit  (Kib) = 1,024 b   = 1,024^1 b =         1,024 b
* 1 mebibit  (Mib) = 1,024 Kib = 1,024^2 b =     1,048,576 b
* 1 gibibit  (Gib) = 1,024 Mib = 1,024^3 b = 1,073,741,824 b

* …and so on, using prefixes similar to the metric ones, but with the funny-sounding “bi” syllable inserted (kibi, mebi, gibi)
* each successive prefix is the previous one multiplied by 1,024

Notice that the difference between the decimal and binary system starts small (at 1K, they’re only 24 bytes, or 2.4% apart), but grows with each level (at 1G, they are >70MiB, or 6.9% apart).
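
A short loop shows that growth (a sketch; here the gap is measured relative to the binary value):

```python
prefixes = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"]
for power, name in enumerate(prefixes, start=1):
    decimal, binary = 1000 ** power, 1024 ** power
    gap_pct = (binary - decimal) / binary * 100
    print(f"{name}: {gap_pct:.1f}% apart")  # giga/gibi row prints 6.9% apart
```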

As a general rule of thumb, hardware devices use decimal units (whether bits or bytes) while software uses binary (usually bytes).

This is the reason that some manufacturers, particularly drive manufacturers, like to use decimal units: it makes the drive size sound larger, yet users get frustrated when they find it has less than they expected once Windows et al. report the size in binary. For example, 500 GB = 465.7 GiB, so while the drive is made to contain 500 GB and is labeled as such, My Computer displays the binary 465.7 GiB (but calls it “465GB”), so users wonder where the other 34 GB went. (Drive manufacturers often add a footnote to packages stating that the “formatted size is less”, which is misleading because the filesystem overhead is nothing compared to the difference between decimal and binary units.)
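
The arithmetic behind that example, sketched in Python:

```python
advertised = 500 * 10 ** 9          # 500 GB, as printed on the box
reported = advertised / 2 ** 30     # what the OS shows, in GiB
print(f"{reported:.1f} GiB")        # 465.7 GiB
print(f"missing: {500 - reported:.1f} 'GB'")  # about 34.3 'GB' unaccounted for
```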

Networking devices often use bits instead of bytes for historical reasons, and ISPs often like to advertise using bits because it makes the speed of the connections they offer sound bigger: 12 Mbps instead of just 1.5 MBps. They often even mix and match bits and bytes, and decimal and binary. For example, you may subscribe to what the ISP calls a “12MBps” line, thinking that you are getting 12 MiBps, but actually just receive 1.43 MiBps (12,000,000 ÷ 8 ÷ 1,024 ÷ 1,024).
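
The same calculation in Python, for the skeptical:

```python
line = 12_000_000                  # a "12 Mbps" line, in bits per second
bytes_per_s = line / 8             # 1.5 MB/s in decimal units
mib_per_s = bytes_per_s / 2 ** 20  # what a download dialog typically shows
print(f"{mib_per_s:.2f} MiB/s")    # 1.43 MiB/s
```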

10
  • 2
    @endolith, not true. First of all, there are indeed, or at least were in the past, some drive manufacturers who use binary units. Second, you missed the point. If they wanted to, they could put 73,400,320 on the drive which would indeed be 70M(i)B instead of 66. They use 70,000,000 because it is cheaper to use that and still call it “70MB“. It’s simple cutting corners and many manufacturers do it. Look at food; instead of 500G, they will put 454G because it equals 1LB. Worse, instead of 454G, they will put 450G and blame the missing 4G on rounding. It’s not a conspiracy, it’s cost-cutting.
    – Synetech
    Commented Feb 8, 2014 at 1:38
  • 1
    Please provide some examples of hard drive manufacturers using binary units.
    – endolith
    Commented Feb 8, 2014 at 14:35
  • 1
    @endolith, this isn’t a history site. Maybe when I do some spring-cleaning and dig up some old drives, I’ll post a photo or something. Otherwise, you can go to a computer-history museum or mom-and-pop computer shoppe and find some old hard-drives if it’s that important to you. These days, most mfgs purposely use labels that make things sound bigger. Like I said, they could make it 73,400,320 bytes to make a 70MB drive if they wanted, but why bother when they can cheap out and still technically call it 70MB? Again, it’s not a conspiracy, it’s common marketing deceptiveness.
    – Synetech
    Commented Feb 8, 2014 at 18:00
  • 2
    I've already looked through the bitsavers archives, and all the examples I find are decimal. This myth that drive manufacturers switched from binary to decimal at some point in order to deceive customers is nuts. They weren't written by marketing departments, but by engineers using the standard units that engineers use. It's logical and sane to call a 70,000,000 byte IBM 3340 drive "70 MB". That's what ''mega-'' has always meant and that's what users would expect. Calling it "66 MB" in some places and "68,359 KB" in other places, like Microsoft Windows does, is insane.
    – endolith
    Commented Feb 8, 2014 at 21:01
  • 1
    @endolith, nobody said that they switched to decimal to deceive, just that they market them in that way on purpose even though they know about the confusion and could make the drive 73,400,320 bytes instead of just 70,000,000 which is not a round number in computers. As for your statement about it “always” having meant that, there is already a thread here about when binary units came into use and it was a long time ago, certainly before computers became consumer products.
    – Synetech
    Commented Feb 10, 2014 at 20:31
-4

Some of the answers are not exact.

Let's first make some notes:

The prefix "kilo" means 1 000. Prefixing "kilo" to anything means 1 000 of that item. The same is true for "mega" or million, "giga" or billion, "tera" or trillion, and so on.

The reason 1 024 exists instead of simply having 1 000 is because of the way in which binary arithmetic works. Binary, as its name suggests, is a base 2 system (it has 2 digits: 0, 1). It can only perform arithmetic with two digits, in contrast to the base 10 system that we use on a daily basis (0, 1, 2... 9), which has ten digits.

In order to get to the number 1 000 (kilo) using binary arithmetic, it is necessary to perform a floating point calculation. This means that a binary digit must be carried each operation until 1 000 is reached. In the base 10 system, 1 000 = 10^3 (you always raise 10 to a power in base 10), a very easy and quick calculation for a computer to perform with no "remainders", but in the base 2 system, it is not possible to raise 2 (you always raise 2 to a power in base 2) to any positive integer to get 1 000. A floating point operation or lengthy addition must be used, and that takes more time to execute than the integer calculation 2^10 = 1024.

You may have noticed that 2^10 = 1 024 is temptingly close to 1 000, and 1 024 to 1 significant figure is 1 000 (a very good approximation), and back when CPU speed was slow as an old dog and memory was very limited, this was a pretty decent approximation and very easy to work with, not to mention fast to execute.

It is for this reason that terms with the "kilo", "mega", "giga", etc., prefixes stuck around with non-exact figures (1 024, 2 048, 4 096, and so on). They were never meant to be exact numbers; they were binary approximations of base 10 numbers. They simply arose as jargon words that "tech" people used.

To make matters even more complicated, JEDEC have created their own standards for units used in semiconductor memory circuits. Let's compare some of the JEDEC units to SI (International System of Units) units:

Kb   = kilobit (JEDEC, 1 024 bits. Note the upper case 'K' and lower case 'b')
kbit = kilobit (SI, 1 000 bits. Note the lower case 'k')

b = bit (JEDEC, note the lower case 'b')
b = ??? (SI does not define the word 'bit' so its use may be arbitrary)

B = byte (JEDEC, 8 bits. Note the upper case 'B')
B = ???? (SI does not define the word "byte" and "B" is used for "Bel" [as in DeciBel])

KB = kilobyte (JEDEC, 1 024 bytes. Note the upper case 'K' and 'B')
kB = kilobyte (SI, 1 000 bytes. Note the lower case 'k' and upper case 'B')

The point is, different places use different prefixes with different definitions. There is no hard and fast rule as to which one you should use, but be consistent with the one you do use.

Due to down voting, allow me to clarify why you cannot make 1 000 in binary by raising it to any positive integer.

Binary system:

+-------------------------------------------------------------------+
| 1 024s | 512s | 256s | 128s | 64s | 32s | 16s | 8s | 4s | 2s | 1s |
+-------------------------------------------------------------------+

Notice that in the binary system, the columns double every time. This is in contrast to the base 10 system which increases by 10 each time:

+-------------------------------------------------------------+
| 1 000 000s | 100 000s | 10 000s | 1 000s | 100s | 10s | 1s |
+-------------------------------------------------------------+

The first powers of 2 (base 2) are:

2^0  = 1
2^1  = 2
2^2  = 4
2^3  = 8
2^4  = 16
2^5  = 32
2^6  = 64
2^7  = 128
2^8  = 256
2^9  = 512
2^10 = 1 024

As you can see, it is not possible to raise the binary 2 to any positive integer to reach 1 000.
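
(As the comments below point out, 1 000 does have an exact integer representation in binary even though it is not a power of two; a quick Python check, added for illustration:)

```python
print(bin(1000))                                 # 0b1111101000
print(2**9 + 2**8 + 2**7 + 2**6 + 2**5 + 2**3)   # 1000
print((1000 & (1000 - 1)) == 0)                  # False: not a power of two
```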

6
  • 4
    I believe you are incorrect when stating that the number 1000 needs floating-point arithmetic. You can represent any natural number using any sort of numbering system. Actually, the binary equivalent of 1000 is 1111101000. Commented May 4, 2014 at 2:54
  • 1
    Doktoro, please remember we are working in the binary system, or base 2, so you are in fact the one who is incorrect. Here are the first 10 powers of 2 in binary (base 2): 2^0 = 1. 2^1 = 2. 2^3 = 4. 2^4 = 8. 2^5 = 16. 2^6 = 64. 2^7 = 128. 2^8 = 256. 2^9 = 512. 2^10 = 1024. Notice that the answer is exponential, it doubles every time you increase the exponent by 1. So you see, it is not possible to raise a binary 2 (a BINARY 2... not a base ten 2) to any positive integer to make 1 000. I appreciate the down vote all the same. Commented May 4, 2014 at 3:40
  • This doesn't explain the difference between a bit and a byte. Actually, there is a "hard fast rule": 1 Kb is one thousand bits, 1 KB is one thousand bytes. There is a huge difference. 8 Kb is 1 KB.
    – Ramhound
    Commented May 4, 2014 at 4:04
  • 4
    Although this statement is correct, you still don't need to perform any kind of floating point arithmetic. You understand powers of 2, so you may also understand that 1111101000 = 2^9 + 2^8 + 2^7 + 2^6 + 2^5 + 2^3 = 1000. Commented May 4, 2014 at 4:08
  • 1
    I don't believe that "They were never meant to be exact numbers, they were binary approximations of base 10 numbers" is true; I think it's just a result of the hardware that was (is) limited to storing ones and zeroes, and hardware addressing using binary registers. Both being base 2; not related to approximating base 10 calculations at all, I feel. Also, I fail to see the point you're making about calculations. It's not like computer output would show 1024 where it actually intended to show 1000, or show 1000 when internally it would be 1024. What calculations are you referring to?
    – Arjan
    Commented May 4, 2014 at 12:32
