144
$\begingroup$

Which conversion should I teach to my undergrad students? That 1 kB is 1024 bytes (binary), as everyone learned back in the nineties, or the more recent industry-led "friendly" conversion that says 1 kB is in fact 1000 bytes (decimal)?

My immediate feeling goes toward the binary conversion, but when the IEC says otherwise and major OSes opt for the decimal conversion (Mac OS X ≥ 10.6 and Ubuntu ≥ 10.10 now use the SI prefixes exclusively to refer to powers of 1000), I'm not so sure anymore.

$\endgroup$
16
  • 28
    $\begingroup$ Please notice that the SI prefix "kilo" is always written with a lowercase "k". Personally, I am used to seeing "kB", even when it strictly isn't a SI prefix. $\endgroup$ Commented Mar 9, 2018 at 19:08
  • 41
    $\begingroup$ xkcd $\endgroup$
    – Kevin
    Commented Mar 9, 2018 at 21:09
  • 7
    $\begingroup$ RAM is sold in KiB, MiB, GiB and hard disks in kB, MB, GB. Both are often labelled kB, MB, GB. So it is not always about programming. $\endgroup$ Commented Mar 10, 2018 at 10:06
  • 15
    $\begingroup$ What I find amusing is that the power-of-two version (the one that's clearly what is usually desired) has no justification whatsoever for the use of the "Kilo" prefix--it's just that some arbitrary power of two happens to come fairly close to some arbitrary power of 10, so we ignore the difference for the convenience of being able to say "K" (or "M" or "G") because "0x0400"-abyte is too hard to say. $\endgroup$
    – Bill K
    Commented Mar 12, 2018 at 16:08
  • 5
    $\begingroup$ Byte is not an SI unit. The SI unit for quantity is the mole. 1 GB is approximately 1.66 femtomole bytes $\endgroup$ Commented Mar 13, 2018 at 10:38

13 Answers

193
$\begingroup$

You should teach both, and you probably want to use the binary unit. When you are talking about the difference, it may be helpful to show them how to tell the two apart when reading them:

The SI kilo- is k:
$1\ \text{kB (kilobyte)} = 10^{3}\ \text{bytes} = 1000\ \text{bytes}$

While the binary kibi- is Ki:
$1\ \text{KiB (kibibyte)} = 2^{10}\ \text{bytes} = 1024\ \text{bytes}$

I notice that you used KB in your question to refer to both sizes; perhaps you should also point out that KB could be interpreted as either of these prefixes (though Wikipedia suggests it is most often used in place of KiB). In your position, I would suggest clarifying which one you mean if you use this notation.

(While you're going over confusing units, a related difference in writing units is that lowercase b is bits, uppercase B is bytes; an eightfold difference is much more significant than 2.4%.)
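If it helps to make the gap concrete, here is a minimal Python sketch (my own illustration; the helper names are not standard):

```python
# A minimal sketch contrasting the two conversions; the helper names are my own.
SI_KILO = 1000       # kB: the SI prefix "kilo"
BINARY_KIBI = 1024   # KiB: the IEC binary prefix "kibi"

def to_kb(n_bytes: int) -> float:
    """Convert a byte count to SI kilobytes (kB)."""
    return n_bytes / SI_KILO

def to_kib(n_bytes: int) -> float:
    """Convert a byte count to binary kibibytes (KiB)."""
    return n_bytes / BINARY_KIBI

size = 65_536  # e.g. a 64 KiB buffer
print(f"{size} bytes = {to_kb(size):.3f} kB = {to_kib(size):.3f} KiB")
# 65536 bytes = 65.536 kB = 64.000 KiB
```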

$\endgroup$
37
  • 54
    $\begingroup$ Beyond just teaching both, you need to teach that k/kilo can mean either depending on context/who's using it. Just because kibble exists doesn't mean people like or actually use it. $\endgroup$ Commented Mar 10, 2018 at 1:14
  • 7
    $\begingroup$ If you cover bits and bytes, you should also at least briefly mention that a "kilobit" is nearly always 1000 bits (because networking) and a "kilobyte" is nearly always 1024 bytes (because everything-except-for-networking). $\endgroup$
    – Kevin
    Commented Mar 10, 2018 at 7:45
  • 31
    $\begingroup$ 1MiB is ≈5% bigger than 1MB, 1GiB is 7.4% bigger than 1GB, and 1TiB is nearly 10% bigger than 1TB. $\endgroup$ Commented Mar 10, 2018 at 10:14
  • 17
    $\begingroup$ I was always taught that the base is binary, an 8-bit word is a Byte, a 16-bit word is two Bytes and, following binary convention 1KB is 1024 Bytes, 1MB is 1024 KB, 1GB is 1024 MB, 1TB is 1024 GB - and in binary, the base unit of computing, it makes perfect sense. I have always found the attempted adoption of SI usage an incorrect and unnecessary confusion. That said, as an educator, a student will need to understand the confusion. $\endgroup$
    – Willtech
    Commented Mar 11, 2018 at 4:16
  • 57
    $\begingroup$ "Should I teach that 1 KB = 1024 bytes or 1000 bytes?" Yes. :-) $\endgroup$
    – user541686
    Commented Mar 11, 2018 at 6:01
70
$\begingroup$

You should teach them it's messed up beyond repair, and it's their generation's job to teach the next generation to use the silly-sounding standard prefixes, so that when they finally retire (and the current old-timers are more permanently removed from the argument), there can finally be a consensus.

As matters currently stand, all the prefixes are unknowable without context. A networking megabit is $10^6$ bits, a filesystem megabyte is $2^{20}$ bytes, a hard drive megabyte is somewhere pretty close to $10^6$ bytes, and a megapixel is "probably a million pixels, who cares."
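A tiny Python illustration of those figures (my own, just echoing the numbers above):

```python
# My own illustration of the figures above: the same prefix, three different quantities.
networking_megabit = 10**6      # bits on the wire
filesystem_megabyte = 2**20     # bytes as many filesystems report them
hard_drive_megabyte = 10**6     # bytes as drive vendors count them

print(networking_megabit, filesystem_megabyte, hard_drive_megabyte)
# 1000000 1048576 1000000
```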

$\endgroup$
5
  • 1
    $\begingroup$ The consensus seems to be that disk size is the nearest simple approximation lower than n×1000^m. So 2.057×10^12 bytes would be advertised as 2 TB, not 2.1 TB. $\endgroup$
    – l0b0
    Commented Mar 9, 2018 at 20:15
  • 4
    $\begingroup$ I'd note the prefixes rarely (basically never) have their binary meaning with units other than bytes. A megapixel is 1 million pixels, a megabit is a million bits. $\endgroup$
    – cHao
    Commented Mar 12, 2018 at 19:51
  • 1
    $\begingroup$ The filesystem megabyte being $2^{20}$ bytes - maybe. Sometimes in the same OS you'll see "megabytes" (including decimal precision) being $10^6$ in some of the tools and $2^{20}$ in others. Most often in command line tools vs GUI tools, but I know of an OS where even different OS-provided GUI tools disagree on this... $\endgroup$
    – davidbak
    Commented Mar 17, 2018 at 2:31
  • 1
    $\begingroup$ @davidbak is right. It depends on OS also. In 2009, Apple switched to standards-based prefixes for filesystems etc, to match disk drive manufacturers, i.e. GB = 10^9 bytes. eshop.macsales.com/blog/… Ubuntu changed in 2010 wiki.ubuntu.com/UnitsPolicy When will Windows catch up with reality? $\endgroup$
    – nealmcb
    Commented Mar 10, 2021 at 21:11
  • $\begingroup$ @nealmcb I'm glad I'm not the only one who's been ticked off by this. I've always found it odd when Windows displayed something like 100.71 MB (105,600,322 bytes) and was never able to figure out why this was the case until I learned about the whole fiasco with binary vs. decimal units. $\endgroup$
    – user14314
    Commented Mar 12 at 11:11
56
$\begingroup$

Actually, you need to teach them both so that they are warned that the usage is not consistent. Then you can choose one as a standard in your course going forward.

Which you choose depends a bit on what you are teaching. If it is how to evaluate hard drives, etc. then $K = 1000$ works now. For most programming, however, $K = 2^{10} = 1024$ is probably best.

Sadly, the dual meaning is likely due to manufacturers trying to avoid confusion in the minds of unsophisticated customers.

$\endgroup$
21
  • 4
    $\begingroup$ The 1024-byte kilobyte was coined far before the 1,000-byte kilobyte of 1998. IEC really just made a mess of things. $\endgroup$
    – phyrfox
    Commented Mar 9, 2018 at 19:09
  • 51
    $\begingroup$ Yes, but kilo = 1000 goes back to 1795: etymonline.com/word/kilo- So non-geeks have some precedence here, perhaps. But more important: If you teach them just the one thing as the "correct thing" you are setting them up for confusion later. The world is messy. Teachers shouldn't pretend it isn't. Being dogmatic isn't very helpful. $\endgroup$
    – Buffy
    Commented Mar 9, 2018 at 19:12
  • 6
    $\begingroup$ Also kB/KB doesn't help with MB, GB, TB which a) are much more relevant b) have much bigger differences. $\endgroup$ Commented Mar 10, 2018 at 2:20
  • 30
    $\begingroup$ "Sadly, the dual meanings is likely due to manufacturers trying to avoid confusion in the minds of unsophisticated customers" More likely it is advertisers wanting their product to sound larger than it really is. Why advertise a 3TB hard drive using the correct 1TB=1024*1024*1024*1024 bytes when you can advertise a 3.3TB hard drive using the lawyer approved 1TB=1000*1000*1000*1000 bytes. 3.3 is bigger than 3, right? $\endgroup$
    – Readin
    Commented Mar 10, 2018 at 5:10
  • 28
    $\begingroup$ @Readin Or, as I see it more often, a 3TB drive that actually has 2.7TB of total storage. $\endgroup$
    – anon
    Commented Mar 10, 2018 at 5:43
24
$\begingroup$

The difference between providing your students with a proper discussion of this topic, and simply teaching them one or the other, is the difference between being a real educator and being a reciter of factoids.

If there is no single correct definition of KB for you, then why would you instill something different in your students? The answer to your question is thus implicit in its very formulation. Your responsibility as a teacher is to convey an understanding of the issue, not to boil it down to one-or-another fact that you know to be less-than-true.

$\endgroup$
2
  • 7
    $\begingroup$ I agree but before providing a proper discussion with my students, I'm providing a proper discussion here which was my intention in the first place (instead of getting simple one or the other answers). $\endgroup$
    – alves
    Commented Mar 11, 2018 at 20:32
  • $\begingroup$ it sucks when nobody knows what measuring unit we're using in the system that we've already built and it somehow already works $\endgroup$ Commented Oct 5, 2023 at 18:44
18
$\begingroup$

Yes, I agree with the other answers: teach both, and also note the similarities.

The difference

  • $\text{ki} = 1024 = 2^{10}$
  • $\text{k} = 1000 = 10^3$
  • $\text{k}, \text{M}, \text{G}, \text{T}, \text{P}$ are sometimes used to mean $\text{ki}, \text{Mi}, \text{Gi}, \text{Ti}, \text{Pi}$

The similarity

  • $1 = \text{k}^0$ and $1 = \text{ki}^0$
  • $\text{k} = \text{k}^1$ and $\text{ki} = \text{ki}^1$
  • $\text{M} = \text{k}^2$ and $\text{Mi} = \text{ki}^2$
  • $\text{G} = \text{k}^3$ and $\text{Gi} = \text{ki}^3$
  • $\text{T} = \text{k}^4$ and $\text{Ti} = \text{ki}^4$
  • $\text{P} = \text{k}^5$ and $\text{Pi} = \text{ki}^5$
  • $\text{E} = \text{k}^6$ and $\text{Ei} = \text{ki}^6$

Quick maths

$64\ \text{bits} \Rightarrow 2^{64} = 2^{6 \times 10 + 4} = \left(2^{10}\right)^{6} \times 2^{4} = \text{ki}^{6} \times 16 = 16\ \text{Ei addresses}$

This has some similarity to, and some difference from, the base-10 system that they (should) already know: we break the exponent into blocks of 10 (instead of groups of 3 decimal digits), convert the remainder to an ordinary number, and the rest is the same.
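A short Python sketch of that blocks-of-10 trick (my own helper, not a standard library function):

```python
# A sketch of the "blocks of 10" trick: express 2**n_bits addresses
# as <small factor> x (2**10)**k, mirroring how decimal digits are grouped in threes.
# Lowercase "ki" follows the convention used in this answer.
BINARY_PREFIXES = ["", "ki", "Mi", "Gi", "Ti", "Pi", "Ei"]

def addresses(n_bits: int) -> str:
    blocks, remainder = divmod(n_bits, 10)  # blocks of 10 bits, like groups of 3 decimal digits
    return f"{2 ** remainder} {BINARY_PREFIXES[blocks]} addresses"

print(addresses(64))  # 16 Ei addresses
print(addresses(32))  # 4 Gi addresses
```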

Where used (mainly)

It is important to show where the two systems are used. While some answers say that they have never seen the $1000$-based SI system used in computing, it turns out that the SI system is used a lot, depending on what is being measured.

  • IEC 60027-2 A.2 and ISO/IEC 80000 e.g. $\text{ki}$:
    • measures of primary memory: RAM, cache.
    • measure of file sizes, partition sizes, and disk sizes within OS.
  • SI units e.g. $\text{k}$:
    • measures of secondary memory devices: hard-disks, SSDs.
    • network speeds.
    • CPU / memory / bus speeds.
    • all other speeds.

However, the symbol $\text{ki}$ is at this time not always used.


see also https://en.wikipedia.org/wiki/Binary_prefix

$\endgroup$
12
  • 6
    $\begingroup$ This answer begs the question. $\endgroup$
    – prl
    Commented Mar 11, 2018 at 4:26
  • $\begingroup$ @prl If you are meaning dodge the question (answering a different question), then you are partly correct. I am trying to extend on other answers. And to give some advice on “How”, where the question was “Which”. $\endgroup$ Commented Mar 11, 2018 at 19:33
  • 1
    $\begingroup$ IMO this is the best answer, but it could be slightly improved by explicit mention of style. I.e. in the same way that there are different styles for citing papers, or for delimiting lists (vide Oxford comma), there are different styles for formatting numbers. In an IEC publication post-2000 you can assume that house style will be SI / *bi. Other organisations / publishers may use other styles. $\endgroup$ Commented Mar 12, 2018 at 9:56
  • $\begingroup$ Pretty good answer. Two nitpicks: 0) For all the prefixes (k, M, Mi, Gi, etc.), use roman type, not italic; I suggest using \text{}. 1) Ki must have a capital K. $\endgroup$
    – Nayuki
    Commented Mar 13, 2018 at 5:02
  • $\begingroup$ @Nayuki “The first letter of each such prefix is therefore identical to the corresponding SI prefixes, except for "K", which is used interchangeably with "k", whereas in SI, only the lower-case k represents 1000.” — en.wikipedia.org/wiki/Binary_prefix $\endgroup$ Commented Mar 13, 2018 at 11:32
12
$\begingroup$

I've worked in IT professionally since the mid-1980s. My current practice is to write whichever of e.g. KB or KiB that I mean at the time, with KB meaning $10^3$ and KiB meaning $2^{10}$. If I'm talking about the RAM in a machine I'll write e.g. "64MiB" and if I'm talking about the as-manufactured and as-marketed size of a disk drive I'll write "1TB." I am not, however, prepared to use words like "mebibyte" in conversation. Maybe one day I'll change my verbal abbreviations from e.g. "meg" to "meb" but I'm not there yet.

$\endgroup$
11
  • 5
    $\begingroup$ I've never seen, in a similar timeframe, MiB etc. used for RAM. KB/MB/GB/TB as concerned with RAM always is 1024-based. $\endgroup$
    – AnoE
    Commented Mar 10, 2018 at 8:52
  • 8
    $\begingroup$ If you're using upper-case K for kilo, you're wrong. (I have seen people mixing up millimetre with megamolar.) $\endgroup$
    – TRiG
    Commented Mar 10, 2018 at 20:24
  • 2
    $\begingroup$ I think I'd sooner say/write "binary megabyte" for MiB than "mebibyte", but the abbreviation would be OK. $\endgroup$ Commented Mar 12, 2018 at 21:05
  • $\begingroup$ @MontyHarder: From a pronunciation standpoint, how about em-byte? $\endgroup$
    – supercat
    Commented Mar 14, 2018 at 15:59
  • $\begingroup$ @supercat "em-byte" sounds like an abbreviation of megabyte. It therefore doesn't resolve the ambiguity the way MiB does. I find MiB a useful abbreviation (the "i" infix represents "b_i_nary"), but the word "mebibyte" itself is not coming out of my mouth smoothly, if at all. $\endgroup$ Commented Mar 14, 2018 at 19:23
7
$\begingroup$

The basic confusion is in the notation at the KB (base-2 derived) vs kB (SI unit) level, and it is helpful to understand the origin of the use of the base-2 derived unit.

A computer is a binary machine.

At the basic level, memory addressing is binary. Usually, at the programmatic level, addresses are keyed in hexadecimal format (it was originally binary); however, hexadecimal is also base-2 derived (it is base 16, or $2^4$) and so is directly compatible.

Beginning at the KB level is useful for communicating understanding here, since the concept of base-2 derived units existed before MB was in common usage (with no differentiation in prefix from the SI unit).

On a memory controller IC, imagine that the address selectors are a row of switches (binary logic gates); depending on how they are switched, the memory at a specific address is read out on the data lines. The data is stored and returned as bytes.

There has always been a limited number of address lines available to address memory, and a complete set of addresses for a given number of address bits is a power of 2. So, on a 4KB machine, there are 12 address lines representing addresses 0 through 4095 (4096 bytes). Those 12 address lines cover addresses 000000000000 through 111111111111 in binary (0x000 through 0xFFF in hexadecimal), i.e. 4096 bytes in decimal. It would not be logical to limit address mapping to 4000 bytes for the sake of decimal convention when there are 12 addressing bits available.
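A small Python sketch of that point (mine, not from any particular textbook): the addressable range is always a power of 2, never a round decimal number.

```python
# My own sketch: n address lines give 2**n distinct addresses, never a round decimal.
def addressable_bytes(address_lines: int) -> int:
    """Each extra address line doubles the number of distinct addresses."""
    return 2 ** address_lines

top_address = addressable_bytes(12) - 1
print(addressable_bytes(12))               # 4096
print(bin(top_address), hex(top_address))  # 0b111111111111 0xfff
```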

This logic initially carried over to hard disks as well, where blocks are groups of bytes accessed by address; however (and I have not checked), I hear that hard disk vendors may find it less critical to use 'round' addressing formats, particularly considering the following.

All standard values in computer terminology are base-2 derived, although, for marketing purposes, some vendors' 20MB hard disks may not be as large as those of vendors keeping the convention. It is convenient to slap 20MB on something even if it does not contain as many blocks, and it is easier to manufacture because less data density is required.

Early IDE hard disks (there were other, earlier systems before IDE), before the Logical Block Addressing (LBA) system was introduced, used to be configured by cylinders, heads, and sectors (CHS). The entire addressing system was binary, and even standard Unix utilities used 1024-byte blocks for display.[1] Standard tools like Conky still use base 2 for display of RAM and HDD information, although they use the GiB-style format to avoid confusion. Later, the LBA addressing system allowed for logical mapping of the CHS format as hard disk sizes grew; however, LBA simply applies the CHS-format addressing internally in the hard disk's onboard controller and allows the OS (and the programmer) to just consider the logical blocks.

The base-2 logic follows through to larger numbers: for example, 1111111111111111111111111111111 bytes (thirty-one 1-bits, or 0x7FFFFFFF in hexadecimal) is, to within a byte, 2GB in standard usage. It is only in decimal that this looks untidy, as 2,147,483,647 bytes, but the underlying technology and conventions are not decimal. Computers are not decimal machines; they are binary machines.

Network addressing also uses binary masks on every one of millions of data packets every second to ensure correct routing, but it is a long time since the data portion of a network packet has resembled a base-2 number. Probably the outermost layer of the packet still does {conjecture}.

You will no doubt need to mention that there is confusion, especially when it comes to marketing products as being a particular size, and that there are some programmatic implementations for displaying values using SI units (it is no longer more inconvenient or slower {actually, it is probably still slower, but on modern computers it is no longer noticeable} for computer programmers to implement decimal, particularly for display), but there can be no doubt that, for computer usage, the correct answer is the base-2 convention.

Per the JEDEC 100B.01 standard, 1KB means 1024 bytes.

rel:
[1] Wikipedia - Cylinder-head-sector (CHS) - https://en.wikipedia.org/wiki/Cylinder-head-sector

This question has been extensively explored.

SuperUser - Size of files in Windows OS. (It's KB or kB?) - https://superuser.com/questions/938234/size-of-files-in-windows-os-its-kb-or-kb

Most OS's and the vast majority of devices that deal with memory/storage use the prefixes K for Kilo to mean 1024 bytes, so when I get RAM that says it's a 4GB module, I know it's 4 Gibi-Bytes (4*1024*1024*1024) and not Giga-Bytes (4*1000*1000*1000).


Quora - Where do we use 1 kB = 1000 bytes, 1 MB = 1000 kB, 1 GB = 1000 MB, 1 TB = 1000 GB? And where do we use 1 KB = 1024 bytes, 1 MB = 1024 KB, 1 GB = 1024 MB, 1 TB = 1024 GB? - https://www.quora.com/Where-do-we-use-1-kB-1000-bytes-1-MB-1000-kB-1-GB-1000-MB-1-TB-1000-GB-And-where-do-we-use-1-KB-1024-bytes-1-MB-1024-KB-1-GB-1024-MB-1-TB-1024-GB

The second idea was formulated by the computer industry: 1KB = 1024 bytes, 1MB = 1024 KB, 1GB = 1024 MB. Notice I am using capital B and not small b; capital B implies bytes. The small b should not be used. This is the case always and is true for things related to computers.


The first idea was formulated by the tele-communication industry and is applicable not to data size (bits and bytes) but to data speed (bits per second or bytes per second): 1Kbps = 1000 bps (bits per second), 1Mbps = 1000 Kbps, 1Gbps = 1000 Mbps. Notice I am using small b and not capital B; small b implies bits. The capital B should not be used. This is the case always and is true for things related to data transmission.

$\endgroup$
1
6
$\begingroup$

I am adding a second answer to clarify some issues with the question and to clear the obvious confusion in the answers.

  1. The question incorrectly states that the linked IEC communication recommends KB to mean 1000. The link refers to 'kilo' only.

  2. kB may mean the SI kilobyte, I.e. 1000 bytes

  3. KB does and has always meant 1024 bytes.

Number 3 is essentially the only useful definition in software engineering. Note that the K is capitalized.

There is also KiB, which is equivalent to KB. Note that the word kilo is always represented by a small k. For the OP ever to teach KB as 1000 would be flat wrong.

The above does not apply to MB and higher. There the usage is ambiguous and depends on context.

$\endgroup$
7
  • 9
    $\begingroup$ Note that while KB as 1000 may be flat wrong, it's also necessary to teach that a lot of people do this wrong, and thus students must never trust KB to mean 1024 without further knowledge of the context. $\endgroup$
    – Peter
    Commented Mar 10, 2018 at 23:39
  • 1
    $\begingroup$ @Peter Agreed 100% A broad discussion of history and context in a way that is interesting and entertaining would help differentiate a mediocre from a decent education. $\endgroup$
    – Sentinel
    Commented Mar 11, 2018 at 8:33
  • 2
    $\begingroup$ In what way is number 3 "the only useful definition"? $\endgroup$ Commented Mar 12, 2018 at 4:19
  • 1
    $\begingroup$ @immibis - it was said to be "the only useful definition in software engineering". Because of the binary nature of computer architecture and software, it's probably correct. Outside of discussions about computers and particularly software, it is most likely not correct. $\endgroup$ Commented Mar 14, 2018 at 0:34
  • 4
    $\begingroup$ @KevinFegan: The only situations I can think of where using an uppercase K for 1000 should not be viewed as being simply wrong would be those where a lowercase "k" is unavailable, e.g. some situations involving signage or limited character sets. $\endgroup$
    – supercat
    Commented Mar 14, 2018 at 16:01
5
$\begingroup$

Teach them that, without context, you don't know, because there most certainly are people out there who will use k to mean 1000 and others who will use k to mean 1024. Which is right is not relevant, because both usages are out there. This leaves any use of "k" with bytes ambiguous unless whoever gave the number also specified what they meant.

For this reason I'd recommend that you teach them, when giving a value in bytes, to always use an IEC prefix like Ki instead: 10 kB is ambiguous, 10 KiB is not.

We can declare certain usages are "wrong" all we want, and I'm not saying that is necessarily unjustified, but that doesn't make those usages go away.

$\endgroup$
5
  • $\begingroup$ Not seen many decimal based computers recently so Kb when referring to computer isn't ambiguous $\endgroup$ Commented Mar 9, 2018 at 22:05
  • 3
    $\begingroup$ @Neuromancer Whether it's ambiguous or not has nothing to do with decimal based computers... $\endgroup$ Commented Mar 10, 2018 at 1:57
  • $\begingroup$ @smithkm Show me where k small k is ambiguous. $\endgroup$
    – Sentinel
    Commented Mar 10, 2018 at 7:38
  • 1
    $\begingroup$ @Neuromancer Kb means... Maybe kb. Oh, the speed of telephone modems that were common until the early 2000s was given in kb/s. $\endgroup$ Commented Mar 10, 2018 at 20:40
    $\begingroup$ @rexkogitans It was Kbps for Kilobits per second. Of course some networking utilities would scale it to bytes and that would be KB/s (usually something like that) but the modems were Kbps just like now it might be Mbps or Gbps (and so on). Or if you're extremely unlucky, yes, Kbps. (Perhaps some wrote it as kbps though) $\endgroup$
    – Pryftan
    Commented Mar 14, 2018 at 21:22
2
$\begingroup$

Teach them both, but focus on 1024 in problems. They'll need to convert bandwidth, etc., in networking and other courses.

Converting using 1000 is easy, but 1024 is tricky, so focus on that; the knowledge will help them in computer architecture, assembly, and networking courses. They'll have to work with it someday, so get them ready.
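A minimal Python sketch they can practise with (the function and names are my own, not a standard API):

```python
# A minimal practice sketch of the 1024-based conversion:
# repeatedly divide by 1024 until the value is small enough to read.
UNITS = ["B", "KiB", "MiB", "GiB", "TiB"]

def human_readable(n_bytes: float) -> str:
    for unit in UNITS:
        if n_bytes < 1024 or unit == UNITS[-1]:
            return f"{n_bytes:.2f} {unit}"
        n_bytes /= 1024

print(human_readable(5_242_880))  # 5.00 MiB
print(human_readable(3_000_000))  # 2.86 MiB
```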

$\endgroup$
1
  • $\begingroup$ @immibis @Lynob If you'd like to continue this discussion, please do so in chat. But, if you simply believe the answer is incorrect, downvote and move on. $\endgroup$
    – thesecretmaster
    Commented Mar 12, 2018 at 23:51
1
$\begingroup$

The other answers all give solid reasons for teaching that both exist and how badly messed up the current situation is. This is important, but it does not clarify what the students should prefer to use themselves. This answer focuses on the practical side of what the students can do after learning about the current situation from the other answers.

Assume the worst-case

As with all uncertainty in computing, the safest option is always to assume the worst-case scenario. That is, to minimise the chance that an incorrect assumption will cause bugs.

In this situation, the following can be applied to cover your bases:

  • Assume the amount of resource you have is in multiples of 1000 Bytes.

  • Assume resources used by 3rd party libraries etc. is in multiples of 1024 Bytes.

  • Provide any figures for resources you use as multiples of 1000 Bytes.

These three assumptions ensure that:

  • At worst, you will think you have less resources than you actually do. For example, assuming 4kB RAM means "4000 Bytes" could mean you plan for having 96 fewer Bytes than you actually do. But it means you will never plan for having 96 Bytes more than you actually do.

  • At worst, you will assume the library that said it uses 2kB RAM meant it uses 48 Bytes more memory than it actually does (assume it meant 2048, not 2000). But you will never plan for it using 48 Bytes less RAM than it actually does.

  • At worst, 3rd parties will assume your program uses more resources than it does, by assuming you meant 1024 Bytes per kB not 1000. But you will never accidentally lead somebody to think it uses less than it actually does.

Of course, it's not ideal to have to "lose" resources unnecessarily. But in the general case, the small difference is unlikely to be enough (especially as a student) to make their project unfeasible. In those specific cases where it does, they should already be measuring the exact footprints of everything and not assuming the sizes of anything from documentation alone.

The benefit, however, is that your assumptions about what somebody else meant by "2kB" will not hurt you when they're wrong; which, in this specific case and as a general lesson to your students, I feel is important.
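A small Python sketch of these rules (the function names are my own, purely illustrative):

```python
# My own sketch of the rules above: interpret quoted sizes pessimistically,
# depending on whether they describe what you have or what something else consumes.
def pessimistic_available(quoted_kb: float) -> int:
    """A resource you HAVE: assume the smaller, 1000-based interpretation."""
    return int(quoted_kb * 1000)

def pessimistic_consumed(quoted_kb: float) -> int:
    """A resource a library USES: assume the larger, 1024-based interpretation."""
    return int(quoted_kb * 1024)

print(pessimistic_available(4))  # 4000 -> up to 96 bytes less than you may really have
print(pessimistic_consumed(2))   # 2048 -> up to 48 bytes more than it may really use
```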

$\endgroup$
0
$\begingroup$

“Which conversion should I teach to my undergrad students?”

Are these engineering-related undergrads? If yes, I'd go with 1024, based on binary math, as that is what engineering is based on.

You can count off the bits on your fingers:

  • $1$ finger = $2$ states, 0 and 1.
  • $2, 4, 8, 16, 32, 64, 128, 256, 512, 1024$. The highest decimal value that can be realized is 1 less, while the number of states represented is $2^x$.
  • $2^1 -1 = 1$. Therefore 0,1
  • $2^2 - 1 = 3$. Therefore 0,1,2,3
  • $2^3 - 1 = 7$. Therefore 0,1,2,3,4,5,6,7
  • etc. up to $2^8 - 1 = 255$. Therefore 256 states, from 0 to 255.
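A quick Python sketch of the same exercise (my own illustration):

```python
# A quick sketch of the finger-counting exercise: x bits give 2**x states,
# and the highest representable value is one less.
for x in range(1, 11):
    states = 2 ** x
    print(f"{x:2d} bits: {states:5d} states, values 0..{states - 1}")
# last line printed: "10 bits:  1024 states, values 0..1023"
```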

Manufacturers may advertise a drive as 2.2TB, but the operating system will report it as 2TB, or maybe even 2TB usable.

$\endgroup$
3
  • 3
    $\begingroup$ Incorrect, unfortunately. Different operating systems report differently. Specifically the fruity ones. $\endgroup$
    – Peter
    Commented Mar 10, 2018 at 23:43
  • $\begingroup$ Incorrect, fortunately. Decent operating systems report sizes correctly, with GB = 1 billion bytes. The fruity ones started it. $\endgroup$
    – gnasher729
    Commented Mar 11, 2018 at 18:02
  • 4
    $\begingroup$ @gnasher729: Given that allocation units are multiples of 512 bytes on just about every operating system, reporting disk utilization in units of 1024 bytes makes a lot more sense to me than reporting in base ten units. $\endgroup$
    – supercat
    Commented Mar 13, 2018 at 23:06
-1
$\begingroup$

In my 26 years as a professional software engineer I have never encountered KB to mean anything other than 1024.

Teach them whatever definitions you like and make sure that they know that 1024 is the only useful one.

$\endgroup$
1
  • $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. Discussion is for chat, not for comments, and any further discussion in the comments will be deleted. $\endgroup$
    – thesecretmaster
    Commented Mar 12, 2018 at 17:30
