49

Modern general-purpose computers typically have a 64-bit word size, but looking back in time, we see narrower CPUs. In the early 80s, the 68000 dealt with 32-bit addresses but the ALU was only 16 bits (so a single 32-bit addition took a pair of ALU operations). The 8086 dealt with 20-bit addresses but the ALU was, again, only 16 bits. Going back to the 70s, the 6502 dealt with 16-bit addresses but the ALU was only 8 bits; the Z80 dealt with 16-bit addresses but the ALU was only 4 bits. (Admittedly part of the motive for going that narrow was to come up with an obviously different implementation to avoid being sued by former employer Intel. But still.)

The reason for this is obvious enough: going back in time, logic gates become more expensive; you can't afford to build such a wide CPU. Also memory is expensive; you have less of it; you don't need such wide addresses.

And then, going back to still earlier decades, we encounter:

  • IBM 650. Word size 10 decimal digits. (Depending on how you reckon it, this is equivalent to somewhere between 33 and 40 bits.)

  • Burroughs 205. 10 decimal digits.

  • IBM 704. 36 bits.

  • DEC PDP-10. 36 bits.

Why so wide?

It was certainly not for the memory addressing reasons that motivated the increase in the 90s-00s from 32 to 64 bits. Indeed, 16 bits would have sufficed for the memory addressing needs of all those computers.

Clearly, other things being equal, a wide ALU is faster than a narrow one (basically, it's the difference between being able to perform an operation in one clock cycle versus several). And it's also presumably more expensive. What factors go into deciding whether it's worth spending the money for the extra speed?

Clearly, the further back we go, the more expensive each logic gate is. I would've expected narrow CPUs going back that far, but that's not what we see.

Another factor is speed of supporting components, particularly memory. There's no point spending money on a CPU that can crunch data faster than the memory can feed it. So what sort of memory speed did these computers enjoy?

https://en.wikipedia.org/wiki/IBM_650 says

A word could be accessed when its location on the drum surface passed under the read/write heads during rotation (rotating at 12,500 rpm, the non-optimized average access time was 2.5 ms).

2.5 milliseconds: 2,500 microseconds access time. Okay, you could do better than that by carefully placing each instruction near where the head would be when the previous instruction completed, but still, that looks to me like a memory system much less, not more, able to keep up with a wide CPU than the semiconductor memories of later decades, which would intuitively make a wide CPU less, not more, worthwhile.
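
As a quick sanity check on that figure, here is a rough back-of-the-envelope in Python, assuming the average wait for a randomly placed word is half a revolution:

rpm = 12_500
rotation_ms = 60_000 / rpm           # one full revolution at 12,500 rpm: 4.8 ms
average_wait_ms = rotation_ms / 2    # expected wait for a randomly placed word: ~2.4 ms
print(rotation_ms, average_wait_ms)  # 4.8 2.4 -- consistent with the quoted 2.5 ms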

So why did the first and second-generation computers have such wide CPUs?

17
  • 14
    One thing about the drum memory access time: that's the time between accesses of a word, so the wider the word, the more data you can get off the drum in the same amount of time. Commented Sep 6, 2020 at 7:53
  • 4
    I think you're mixing the (memory / register) word size with the ALU width in your question. The Z80, for instance, has 8-bit words, but a 4-bit ALU. Many early computers had large words (for the reasons dirkt gives below), but bit-serial ALUs. When memory is slow, large words make sense (because you read more per read operation); when gates are expensive, small ALUs make sense (because they cost less). Commented Sep 6, 2020 at 10:59
  • 8
    Many of the 36-bit word machines were oriented to number-crunching floating point. 36-bit floating point numbers just barely hold the equivalent of 10 digits of decimal precision after the decimal point. This is equivalent precision to typical desk calculators of the time.
    – RETRAC
    Commented Sep 6, 2020 at 13:02
  • 5
    CDC 6x00 series machines: 60 bits
    – Hot Licks
    Commented Sep 7, 2020 at 1:46
  • 3
    @SingleMalt: Another thing that enabled it was bit-serial processing. Depending upon what the bottlenecks are with program execution speed, doubling the word size might require nothing more than adding an extra bit to one counter and doubling the speed of part of the system that has a fairly short propagation delay.
    – supercat
    Commented Sep 8, 2020 at 20:55

10 Answers

79

And if you go back further, e.g. to the ENIAC, you'll see a word size of 40 bits.

And if you go back even further, to mechanical calculators, you'll see word sizes determined by the number of decimal digits they can represent.

And that explains the approach: Computers originally were meant to automate calculations. So you want to represent numbers. With enough digits you can do meaningful calculations.

Then you decide if you want a binary or a decimal representation.

That's how you end up with something like 10 decimal digits, or between 33 and 40 bits.
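
To make that "between 33 and 40 bits" concrete, here's a small worked check in Python (not part of the original answer):

import math
print(math.log2(10 ** 10))  # ~33.2: minimum bits to hold any 10-digit number in pure binary
print(10 * 4)               # 40: bits used if each digit is stored as a 4-bit BCD code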

Then you discover that this is too many bits for instructions. So you stuff several instructions into one word (or you have lots of space for an address in the instruction).

And you think about representing characters, which take 6 bits on teletypes. So multiples of 6 make a lot of sense.

Then you want to make the computers cheaper. If you are DEC and have a 36-bit machine, and you are using octal, 3×4 = 12 bits is an obvious choice, because that's an even fraction of 36 bits. So you get the PDP-8.

And further on, you get the PDP-11, microcomputers, and word sizes that are multiples of 8 bits.

So starting out with large word sizes to represent numbers is the natural thing to do. The really interesting question is the process by which they became smaller.

4
  • 9
    The PDP-8 was patterned after the PDP-5. The PDP-5 was designed in response to customers who really needed something cheaper than a PDP-4. The smaller word size was part of that design goal. Commented Sep 6, 2020 at 13:12
  • 2
    @WalterMitty yes, I shortened the history somewhat.
    – dirkt
    Commented Sep 6, 2020 at 16:26
  • 4
    I only added the comments about the PDP-5 and the PDP-6 because they push the design decision back in time by about four years. The PDP-8 greatly outsold the PDP-5, and the PDP-10 greatly outsold the PDP-6. Commented Sep 6, 2020 at 18:36
  • 4
    The PDP-6 word size was influenced by the desire to be able to contain 2 addresses, so that a LISP cell would be one machine word.
    – dave
    Commented Sep 7, 2020 at 19:54
21

Longer words mean more bits can be processed at once. An 8-bit processor can perform a 32-bit calculation, but it has to do it in 4 stages of 8 bits each. A 32-bit processor can do it in one stage.
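
As a rough sketch of those four 8-bit stages (Python standing in for what an 8-bit CPU's add-with-carry loop would do; the function name is just for illustration):

def add32_via_8bit(a, b):
    # Add two 32-bit values using only 8-bit additions plus a carry flag.
    carry, result = 0, 0
    for i in range(4):                              # four byte-wide stages, least significant first
        s = ((a >> (8 * i)) & 0xFF) + ((b >> (8 * i)) & 0xFF) + carry
        carry = s >> 8                              # carry out of this 8-bit stage
        result |= (s & 0xFF) << (8 * i)
    return result & 0xFFFFFFFF

assert add32_via_8bit(0x0001FFFF, 0x00000001) == 0x00020000   # carries ripple across byte boundaries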

Since early computers had limited clock speeds due to slow electronics, increasing the word size was one of the few options available to improve performance.

In the 70s the focus shifted to cost and 8-bit CPUs became popular. Word widths slowly increased as micro/personal computers became more popular and performance once again became a priority. Today some Intel CPUs have 512-bit word support for certain operations, all in the name of performance.

7
  • 12
    Intel CPUs still have 64-bit words. Those AVX-512 registers are for SIMD operations, i.e. operating on multiple words at once. They aren't general-purpose registers and live in a different unit, separate from the normal registers. Some other architectures have support for even longer SIMD registers.
    – phuclv
    Commented Sep 6, 2020 at 11:26
  • 11
    +1, and I'd like to point out the slow speeds were in large part due to the long cables and high parasitic capacitance/inductance, which was unavoidable until integrated circuits were invented. ICs then allowed higher clock speeds, especially if the number of transistors was kept low enough to fit on a single chip.
    – jpa
    Commented Sep 6, 2020 at 15:48
  • 7
    It would be more accurate to say that modern x86-64 CPUs simply aren't very word-oriented at all. They can natively load/store any power-of-2 size up to 32 or 64 bytes, regardless of alignment within an L1d cache line, with no penalty as long as it doesn't cross a cache-line boundary. There is no one "word size" that they have to use. (Although 64-bit is the upper limit for a single integer or FP value without extended-precision techniques, other than 80-bit x87 FP. SIMD vectors of multiple elements are a different thing from a larger "word size".) Commented Sep 7, 2020 at 12:27
  • 1
    The last paragraph is misleading. Microprocessors were invented in the 1970s, and their limited word size was due to the limits of die size, not cost. They enabled new market segments (home computers, industrial controllers), and despite their lower cost they were not competitive against existing market segments for many decades (until their word size became large enough).
    – DrSheldon
    Commented Sep 9, 2020 at 13:56
  • 1
    @DrSheldon 16-bit microprocessors were available in the 70s; they were less common in microcomputers due to cost. Not just the cost of the chip, but the cost of creating a PCB with a 16-bit bus at a time when high-density PCBs were expensive and most layout was done by hand.
    – user
    Commented Sep 11, 2020 at 11:21
13

A possible answer occurs to me: it might be precisely because of the slow memory.

Say you want to add a pair of ten-digit decimal numbers, SUM += VAL, on a 6502. That chip has a BCD mode in which it can add two digits at a time; it has to do everything through an 8-bit accumulator. So we need a loop of five iterations, which we might unroll for speed. Each iteration will look like:

LDA SUM+0    ; load one byte (two BCD digits) of SUM into the accumulator
ADC VAL+0    ; add the corresponding byte of VAL, plus the carry from the previous pair
STA SUM+0    ; store the result back into SUM

for offsets from 0 to 4 inclusive.

If we put the operands in zero page, that's thirty memory accesses for instructions and another fifteen for operands: forty-five memory accesses at maybe one microsecond each, plus however many more for overhead; still less than a hundred microseconds for the whole operation.

Now connect the 6502 to the memory drum of a 650. Suddenly the worst-case memory access time is measured in milliseconds, not microseconds. Some accesses might be amenable to near-optimal placement, but not all. The whole operation will be orders of magnitude slower!
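
Here's a back-of-the-envelope tally in Python of the counts and timings described above (the ~1 µs and ~2.5 ms figures are the rough numbers already quoted, not measurements):

accesses = 5 * (3 * 2)        # five iterations, three 2-byte zero-page instructions each: 30 fetches
accesses += 5 * 3             # plus one operand read or write per instruction: 15 more
print(accesses)               # 45 memory accesses
print(accesses * 1e-6)        # ~45 microseconds with ~1 microsecond semiconductor memory
print(accesses * 2.5e-3)      # ~0.11 seconds if every access waited the drum's ~2.5 ms average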

So that's an argument for needing wide registers. As user1937198 points out, the 650 could only add one digit at a time, so maybe adding a pair of ten-digit numbers takes ten CPU clock cycles, but that's okay; the point is that with the wide registers, it doesn't need a whole bunch of memory accesses in the middle of this.

2
  • 1
    Interesting point (already upvoted), but I think that much of the latency could have easily been solved with interleaving with most memory technologies, including drum, core, delay line, etc. Which memory technologies are you thinking of? Commented Sep 6, 2020 at 14:21
  • 1
    @OmarL Thanks! I'm thinking about drum, which is not so amenable to interleaving. Core is more amenable to it, and then in some cases the word size does come down when core is introduced.
    – rwallace
    Commented Sep 6, 2020 at 18:46
3

I'd suggest that one issue is that a 1950s/60s mainframe was considered to be a significant corporate resource, and by and large enough would be spent on it that it could serve the needs of the entire company as efficiently as possible. The S/360-20 was a reduced-width entry-level system, and similarly DEC etc. minis attacked the mainframe market by being able to keep the price down due, in part, to using narrow registers and data paths.

I'd also suggest that computers which were at least in part intended for scientific use had a word size tailored to the particular sign+exponent+mantissa representation that that manufacturer used (typically around 48 bits), and that it made sense for commercial systems from the same manufacturer to use a comparable word size... insofar as they used registers for computation, rather than handling BCD arithmetic and string manipulation as memory-to-memory operations.

3

The premise isn't entirely true. The IBM 1401, perhaps the most popular computer of the 1960s, used a seven-bit word (not including the parity bit). This was a business machine, not a number cruncher.

Mainframe computers optimized for scientific and engineering calculations used big words for the same reason that most computer languages of the 21st century use 64 bits for their default floating point. Numerical calculations need extra precision to guard against numerical instability. Routinely using multiple precision techniques was considered too inefficient. But personal computers did a lot more text processing and graphics than heavy-duty number crunching, so multiple precision was OK for the occasional calculation.

1

The 8086's addressing wasn't really 20 bits; it was built from two 16-bit components (handled by a 16-bit ALU), those components being a segment and an offset. It sounds like 16+16=32, but the actual location was segment*16+offset, wrapping around at 2^20 (later chips like the 80286 allowed not wrapping; see the A20 line).

Usually this meant that, e.g. for an array, you would allocate it to start at a multiple of 16 and use that as the segment, then use the offset as the index within that array, always starting at zero. But it's very much using 16 bits at a time.
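
A minimal sketch of that address formation in Python (the function name is just for illustration):

def physical_address(segment, offset):
    # 8086 real-mode address formation: segment*16 + offset, wrapped to 20 bits.
    return (segment * 16 + offset) & 0xFFFFF

# 0xFFFF:0x0010 wraps around to physical address 0 on an 8086 (no A20 gate yet).
assert physical_address(0xFFFF, 0x0010) == 0x00000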

AVR is a "modern" 8-bit architecture; it might be the brains in your washing machine or microwave. See the ATMEGA328p, or the Arduino UNO. It's only got 8-bit words and an 8-bit ALU, but addressing (2 KB RAM, 32 KB flash) is done through multiple bytes. And because it's 8-bit, it's very much set up for handling numbers bigger than 8 bits, with instructions such as add-with-carry, etc.

A regular 64-bit x86 PC has 64-bit words, which is way too much memory to handle as addresses. They don't even allow using all of them: the upper bits of an address act as flags, with meanings other than just the address. Last I checked, the limit was 48 bits, but that's only 256 TB, so they might be expanding soon.
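
Quick arithmetic behind that 256 TB figure, in Python:

print(2 ** 48)             # 281474976710656 bytes addressable with 48 bits
print(2 ** 48 // 2 ** 40)  # 256 (TiB): the limit mentioned above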

7
  • Side note: the CPU actually had 20 address lines, i.e. externally (from the view of the motherboard) it was indeed a CPU with 20-bit addresses. The segmentation was used only internally, i.e. there were no "20-bit pointers"; instead 32-bit "pointers" were used, and converted on the fly to 20-bit addresses as you write.
    – peterh
    Commented Sep 8, 2020 at 8:41
  • Afaik even this 48-bit limit is only for hardware I/O mappings; on eBay I could find servers with 1 TB RAM at most (price about $50000).
    – peterh
    Commented Sep 8, 2020 at 8:47
  • @peterh-ReinstateMonica Big shared memory machines like the SGI (now HPE) UltraViolet series are routinely sold with a dozen TBs of RAM or so (IIRC supporting up to 64 TB RAM). I guess they just aren't commonly sold on eBay.
    – TooTea
    Commented Sep 8, 2020 at 14:19
  • @TooTea Thanks, but I think the CPUs here see different address spaces. If my guess is right, a single CPU can see only part of the whole memory, making its address pins partially unused.
    – peterh
    Commented Sep 8, 2020 at 14:23
  • 1
    @peterh-ReinstateMonica Well, of course only a part of the RAM modules is physically connected to the pins of the integrated memory controller in any single CPU. However, all CPUs can access all of the memory through the interconnect (QPI and NUMAlink). It works exactly the same as any ordinary dual socket server.
    – TooTea
    Commented Sep 8, 2020 at 14:38
1

Given the small (by today's standards) memory, it was very convenient to be able to include a full memory address within a machine instruction.

For instance, Honeywell 6000 assembler instructions looked like this:

[Figure: Honeywell 6000 instruction format]

The first half of the instruction could contain a full memory address, so instructions such as load-register were self-contained. The complications of segmented memory were completely avoided.

The address section could also be used to contain literal data, providing "immediate" instructions (e.g. the literal value 123456 could sit in those first 18 bits, and the machine instruction could say to add that value to a specific register). What would later, in the x86 processors, take several instructions (to build an address, load its contents, add it to a register, and copy it to another register) was fast and trivial.
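
A quick check in Python that such a literal fits in an 18-bit field (treating 123456 as a decimal value, just for illustration):

print(2 ** 18)           # 262144 distinct values in an 18-bit field
print(123456 < 2 ** 18)  # True: the example literal fits directly in the address half of the word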

1

Many early machines processed data in bit-serial fashion, which meant that doubling the word size would reduce the number of words that could be held by a given number of memory circuits, but wouldn't increase the required number of processing circuits. To the contrary, cutting the number of discrete addresses would reduce the amount of circuitry necessary to access them.
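
Here's the bit-serial idea sketched in Python (a single one-bit full adder reused over and over; widening the word only adds iterations, not adder hardware):

def serial_add(a, b, width):
    # Bit-serial addition: one full adder applied 'width' times, least significant bit first.
    carry, result = 0, 0
    for i in range(width):
        bit_a, bit_b = (a >> i) & 1, (b >> i) & 1
        result |= (bit_a ^ bit_b ^ carry) << i               # sum bit for this position
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))  # carry into the next position
    return result

assert serial_add(11, 6, 8) == 17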

Further, while it might seem that using e.g. an 18-bit word to hold a value that would always be in the range 0-255 would be wasteful, having instructions that can process either long or short integer types would add complexity, and having the hardware use short integer types would increase the number of instructions that would be needed to operate on longer ones.

For integers that are not part of an array, the amount of storage needed to hold the instructions that work with them will almost always be much larger than the amount of storage to hold the values themselves. Thus, even if 75% of such integers would only need a half-word to hold them, doubling the amount of code needed to handle the other 25% would outweigh any savings from using a smaller word size.

Incidentally, one advantage of load-store architectures is that they allow a system to reap most of the benefits of being able to work with mixed-size objects, while having to add multiple-size support to only a few instructions (loads and stores), rather than to all instructions.

Perhaps it would have made sense to have machines wired so that part of their memory space is occupied by full-width memory and part of it only has half of the data bits connected. This was sometimes done even into the 1990s with things like the display memory on true-color video cards, which was frequently wired so that only 3/4 of the bytes would be populated. Such designs, however, would tend to limit the use of the memory system to certain specific purposes. That makes sense for something like a 640x480 "true color" video card, but less sense for a general-purpose computer.

1

The early computers were created to do high-precision scientific calculations that couldn't practically be done by hand.

The newer computers you mention from the 70s and 80s were business and home computers.

And you are mistaken in saying that it was not memory addressing that motivated the increase in word size from 32 to 64 bits. 32 bits were sufficient for home applications (16 bits weren't), but large-scale computing was very much pushing that boundary. Before 64-bit processors, Intel had already introduced a scheme** to increase the address space beyond 32 bits. Home video games from the early 90s had 16-bit data words but already needed 24-bit addressing.

** Called 'Physical Address Extension', if I'm not mistaken.

1
  • Good point about the different markets! But to be clear, I said memory addressing was indeed what motivated the recent increase from 32 to 64 bits - but was not what motivated the large word sizes of early computers.
    – rwallace
    Commented Sep 15, 2020 at 17:13
0

Early computing was dominated by batch processing: a program would run to completion without waiting for I/O devices except storage. When a program was finished, the next program (or batch of data) would be run, possibly for a different user.

Wider registers, memory, or ALUs would make computers faster and therefore require fewer computers for the same throughput; that is, less memory and control logic, and a similar amount of register, ALU, and memory-interface hardware for the same task.

Later, computers started to be used for tasks that were I/O bound; these reduced the memory savings of a fast CPU, as a fast CPU did not reduce the total run time of the program, though some saving was possible by using slower memory and copying to fast memory as required. That's why early home computers were typically stand-alone 8-bit systems, not dumb terminals connected to mainframes.

Text processing also became more common, and large word sizes are less of an advantage there.

The reduced cost (and miniaturization) of computers made the cost of using multiple smaller, slower computers lower than the communications (and later admin) costs of a few larger computers.

2
  • 1
    Even programs running in batch mode without multitasking would pause to allow the operator to input data using the machine's sense switches or (in the case of more advanced systems) the control terminal. Commented Sep 8, 2020 at 19:13
  • Not to mention that reading the batch itself, as well as all associated data stacks, is I/O bound and contains a lot of waiting.
    – Raffzahn
    Commented Sep 8, 2020 at 19:40
