29

Why do computers use the binary number system (0,1)? Why don't they use the ternary number system (0,1,2), or some other number system, instead?

  • 10
    This is a question about electrical engineering. Apparently binary gates are easier to implement. IIRC a ternary-based computer was built at some point. Commented Jun 13, 2014 at 6:16
  • 7
    What research have you done? When I type the title of your question into Google, I get back search results that provide several answers to your question. Also, the Wikipedia article on binary numbers and binary code has a short explanation. We expect you to do a significant amount of research before asking here, and it looks to me like you haven't done even basic research before asking. Searching Google and Wikipedia is a bare minimum.
    – D.W.
    Commented Jun 13, 2014 at 21:30
  • Larger bases did not turn out to be useful overall.
    – Raphael
    Commented Jun 13, 2014 at 21:40
  • @Raphael: Ternary did. Commented Sep 10, 2014 at 19:29
  • 2
    I'm going to put this as a comment because there's already an accepted answer. It is extraordinarily difficult to build electronic devices that reliably discriminate among ten values because of manufacturing tolerances. It is relatively easy to build electronic devices that discriminate between two values. So, the short answer is that computers use binary representation for reliability. I've written a more detailed answer for those who may care: bbrown.kennesaw.edu/papers/why_binary.html
    – Bob Brown
    Commented Jun 8, 2015 at 22:38

6 Answers

32

Since we're in Computer Science, I'll answer this way: they don't.

What do we mean by a "computer"? There are many definitions, but in computer science as a science, the most common one is the Turing machine.

A Turing machine is defined by several components: a state set, a transition table, a halting set, and, importantly for our discussion, an alphabet. This alphabet refers to the symbols the machine can read as input and write to its tape. (You could have different input and tape alphabets, but let's not worry about that for now.)

So, I can make a Turing machine with input alphabet $\{0,1\}$, or $\{a,b\}$, or $\{0,1,2\}$, or $\{\uparrow,\downarrow\}$. It doesn't matter. The fact is, I can use any alphabet I choose to encode data.

So, I can say that $0001001$ is 9, or I can say that $\uparrow \uparrow \uparrow \downarrow \uparrow \uparrow \downarrow$ is 9. It doesn't matter, since they're just symbols we can distinguish.

The trick is that binary is enough. Any sequence of bits can be interpreted as a number, so you can convert from binary to any other system and back.

But, it turns out unary is enough too. You can encode 9 as 111111111. This isn't particularly efficient, but it has the same computational power.
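As a concrete illustration (this sketch is ours, not part of the answer), the same number round-trips through binary, ternary, or unary; only the representation length changes:

```python
# Illustrative sketch (not from the answer): encoding/decoding an integer
# in any base >= 2, plus unary, shows the alphabet choice is arbitrary.
def to_base(n, base):
    """Digits of n in the given base, most significant first."""
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return list(reversed(digits)) or [0]

def from_base(digits, base):
    value = 0
    for d in digits:
        value = value * base + d
    return value

assert to_base(9, 2) == [1, 0, 0, 1]      # binary
assert from_base(to_base(9, 3), 3) == 9   # ternary round-trip
assert len("1" * 9) == 9                  # unary: nine copies of one symbol
```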

Things get even crazier when you look at alternate models of computation, like the lambda calculus. Here, you can view numbers as functions; in fact, you can view everything as functions. Things are encoded not as bits, 0s and 1s, but as closed mathematical functions with no mutable state. See the Church numerals for how numbers can be represented this way.
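For the curious, here is a minimal sketch of Church numerals written in Python (our own illustration; the lambda calculus itself has no particular host language, of course):

```python
# Church numerals: the number n is the function that applies f to x n times.
zero = lambda f: lambda x: x

def succ(n):
    return lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Decode by counting how many times f gets applied.
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))  # 3
```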

The point is that the choice of 0s and 1s is an entirely hardware-specific issue, and the choice is arbitrary. The encoding in use isn't particularly relevant to computer science, outside of a few subfields like operating systems or networking.

  • What about input encoding in 2-counter machines? It seems to matter. Are you sure you can dismiss encoding issues so radically? And I would hardly agree that complexity is a non-issue; but is the lambda calculus a proper way to address it?
    – babou
    Commented Jun 19, 2014 at 21:45
  • I'll admit that there are complexity problems when you use unary. But the choice of binary vs. ternary or anything like that is somewhat arbitrary. It's not that the choice of encoding doesn't matter, but there's no law dictating why we use a particular one. It's dictated mostly by convenience or by hardware requirements, which are somewhat outside of computer science. Commented Jun 19, 2014 at 22:28
  • 1
    $\begingroup$ "So, I can make a Turing machine with input alphabet". I think you should write "tape alphabet" here instead of "input alphabet". The interesting part is what is used for computation and not for input. Furthermore I disagree with unary being enough. A TM with unary tape alphabet is almost useless, because the tape is constant. $\endgroup$
    – Simon S
    Commented Jun 20, 2014 at 7:17
  • Regarding the last sentence: design and study of computer hardware and architecture are also part of computer science.
    – Kaveh
    Commented Aug 11, 2014 at 16:48
  • 2
    You may want to add the point that going from unary to binary cuts down the size of your numbers to their logarithm, which is a serious improvement, while going further doesn't gain as much (only a constant factor). Commented Feb 23, 2015 at 22:49
22

Some other things to consider:

Part of the reason for using a binary number system is that it's the lowest-base number system that can represent numbers in logarithmic, rather than linear, space. To uniquely distinguish between $n$ different numbers in unary, the average length of the representations must be proportional to $n$, since there is exactly one unary string of each length: covering $n$ numbers requires every length from $0$ up to $n-1$, i.e. $1 + 1 + \cdots + 1 = n$ strings. To uniquely distinguish between $n$ different numbers in binary, the average length need only be proportional to $\log_2 n$, since there are $2^k$ binary strings of length $k$: $1 + 2 + 4 + \cdots + \frac{n+1}{2} = n$.

Choosing a larger base improves on the space requirement only by a constant factor; base 10 gets you $n$ numbers with an average representation length of $\log_{10} n$, which is $\log_{10} 2 \approx 0.3$ times the length of the base-two representation, for all $n$. The difference between binary and unary is much greater; in fact, it grows with $n$. You gain a lot by choosing binary over unary; you gain comparatively little by choosing a higher base.
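A quick illustration of those growth rates (this sketch is ours, not part of the original answer):

```python
# Illustrative sketch (not from the answer): representation length in
# unary grows linearly with n, any base >= 2 grows only logarithmically,
# and base 10 shrinks the binary length by a mere constant factor
# (log10(2) is about 0.3).
for n in (10, 1_000, 1_000_000):
    unary_len = n                  # n copies of a single symbol
    binary_len = n.bit_length()    # about log2(n) digits
    decimal_len = len(str(n))      # about log10(n) digits
    print(f"n={n}: unary={unary_len}, binary={binary_len}, decimal={decimal_len}")
```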

There is some truth to the idea that it's easier to implement digital logic if we only have to distinguish two states. Electric signals are analog and, as such, can be interpreted to represent as many discrete states as you'd like... but you need more precise (hence expensive and finicky) hardware to reliably distinguish more states over the same range. This suggests choosing as low a base as you can.

Another potentially important consideration is that logic has classically been understood to involve two distinct truth values, $true$ and $false$. We have fancier logics than this now, but a great deal of mathematics and science still rests on this two-valued foundation. When you consider that computers are used to compute, and that logic is important for computation, it suggests having good support for at least two distinct states; but logic doesn't really require more than that.

9

One of the big reasons that most computer circuits use two states is that the quantity of circuitry necessary to distinguish between n different voltage levels is roughly proportional to n-1. Consequently, having three discernible states would require twice as much circuitry per signal, and having four would require three times as much. Tripling the amount of circuitry while only doubling the amount of information would represent a loss in efficiency.
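To make that tradeoff concrete, here is a small back-of-the-envelope script (our own illustration, using the proportional-to-$n-1$ cost model asserted above):

```python
# Back-of-the-envelope sketch (our assumptions: sensing circuitry scales
# as levels - 1, as the answer asserts; information per signal is
# log2(levels)). Binary maximizes information per unit of circuitry.
import math

for levels in (2, 3, 4, 8, 10):
    circuitry = levels - 1
    bits_per_signal = math.log2(levels)
    print(f"{levels} levels: {bits_per_signal / circuitry:.3f} bits per unit of circuitry")
```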

Note that there are some places in computers where information is stored or communicated using more than two states per element. In a flash memory array, hundreds or thousands of memory cells may be serviced by one set of level-sensing circuitry. Using four levels per cell rather than two when storing a certain amount of information might more than triple the size of the level-sensing circuitry, but would cut the number of memory cells required in half. When communicating over 100BASE-T or faster Ethernet, the cost of the circuitry needed to detect multiple signal levels on the cable is likely dwarfed by the cost of either using a cable with more wires or using a cable that can handle more signal transitions per second without an unacceptable level of distortion.
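A rough arithmetic sketch of the flash-memory tradeoff described above (our own illustration, with made-up array sizes, not datasheet values):

```python
# Illustrative arithmetic (made-up sizes): with one sensing block shared
# by a whole array, multi-level cells trade a larger sensing block for
# fewer cells, which pays off when many cells share the sensing block.
import math

def cells_needed(total_bits, levels):
    bits_per_cell = math.log2(levels)   # 1 bit at 2 levels, 2 bits at 4
    return math.ceil(total_bits / bits_per_cell)

print(cells_needed(1_000_000, 2))  # 1000000 cells at 1 bit per cell
print(cells_needed(1_000_000, 4))  # 500000 cells at 2 bits per cell
```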

9

There do exist quantum computers in research labs that use the qubit as the basic unit of information; a qubit can be in a superposition of the states 0 and 1.
http://en.wikipedia.org/wiki/Quantum_computer

Ternary computers have also been built; see http://en.wikipedia.org/wiki/Ternary_computer

So it is indeed possible to build computing devices that do not rely on the binary number system. Fiber-optic systems, for example, can use dark as 0 and two different orthogonal polarizations of light as 1 and -1.

I mention these examples to show that although binary numbers are sufficient for computing, there are alternative number systems that can be used for computing.

The binary number system is nice in the sense that we can encode all integers $x \in \mathbb{Z}$ using a radix representation of numbers (http://en.wikipedia.org/wiki/Radix). A given bit pattern can represent the ASCII code A = 0x41 = 01000001, or it can represent a machine instruction, such as the x86 nop = 0x90 = 10010000.
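A tiny demonstration of the same point (ours, not the answerer's): a bit pattern is just a number, and its meaning is a matter of interpretation.

```python
# Illustrative sketch (not from the answer): the same bit pattern can be
# read as a number, an ASCII character, or an x86 opcode.
byte = 0b01000001
print(hex(byte))    # 0x41
print(chr(byte))    # 'A' under the ASCII interpretation
print(bin(0x90))    # 0b10010000, the bits of the x86 `nop` instruction
```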

  • 3
    But they are still using a binary system; in quantum computing, the state of superposition is not a third possible value. Also, throwing out a quantum computing example is not helpful to the question asked.
    – lPlant
    Commented Jun 13, 2014 at 13:29
  • I didn't know about this.
    – smali
    Commented Sep 12, 2014 at 4:26
  • $\begingroup$ "q-bit as the basic unit of information that can be both 0 and 1 simultaneously." This is nonsense. Qubits are fundamentally different from normal bits, in that they represent a non-discrete (complex) value. They are incomparable for all practical purposes. $\endgroup$
    – Discrete lizard
    Commented Apr 5, 2018 at 8:19
5

At the heart of a digital computer's processing power is the transistor, which works like a switch. Raising the current at the "base" of the switch allows current to flow between the "collector" and the "emitter": the switch is turned on. The transistor is designed to operate in one of two modes, fully on ("saturated") or fully off, with a clear division between those two states. The transistor can switch between the two states quickly and will remain in a state with very limited errors.

This circuitry forms the basis for logic gates such as AND, NAND, OR, and XOR, with NAND being the most basic of the building blocks: every other gate can be constructed from it. These logic devices are assembled into processors that remain in predictable states, and huge numbers of transistors can be packed into a small space to provide the functionality needed.
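To illustrate why NAND alone suffices as a building block, here is a small sketch (ours, not the answerer's) building NOT, AND, and OR purely from NAND over the two states 0 and 1:

```python
# NAND is functionally complete: NOT, AND, and OR derived from it alone.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Truth tables over inputs (0,0), (0,1), (1,0), (1,1):
assert [and_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
assert [or_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]
```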

A transistor can manage multiple or varying states, but when operated in that manner it does not produce a conventional "digital" computer: it does not tend to stay in a predictable state and is prone to interference, saturation, oscillation, and so on, so it has limited applications in terms of computational ability. Op-amp circuits, for example, could be considered analog computers.

-3

We only use binary (1, 0) because we do not currently have the technology to create "switches" that can reliably hold more than two possible states. (Quantum computers aren't exactly on sale at the moment.) The binary system was chosen because it is quite easy to distinguish the presence of an electric current from the absence of an electric current, especially when working with trillions of such connections. Using any other number base on top of this system would be impractical, because the system would need to constantly convert between the bases. That's all there is to it.

  • 2
    This is true but isn't it all already included in the existing answers? Commented Oct 23, 2014 at 12:02
