
According to Wikipedia, the IBM System/360 had byte-addressable RAM.

Previously, IBM had machines with word-addressable memory. Did they make the switch for comparability between different machines?

Or was it just performance, money, or single-symbol size that was the reasoning behind it?

  • I do not understand what your question is about. Byte addressing is about the logical view of memory, nothing else. As such, it has no implications beyond a byte being the smallest directly addressable unit. So, what again is your question?
    – Raffzahn
    Commented Jul 7, 2020 at 20:07
  • @Raffzahn My question is about why IBM switched from word addressing to byte addressing. I thought it was because of comparability issues.
    – No Name QA
    Commented Jul 7, 2020 at 20:26
  • Again, I do not get what you're really asking. Byte addressing allows you to address bytes; that's all. Data paths are independently wider or narrower than words and have no relation beyond the obvious. (P.S.: would you mind adding your location to your bio? Maybe we're hitting a language barrier again?)
    – Raffzahn
    Commented Jul 7, 2020 at 20:31
  • When your word size is 32 bits and memory is expensive, chances are that there are smaller units within that word (BCD digits, characters, bytes, half-words) you may want to address individually. At some point in the design process, someone probably sat down and tried to work out which of these sub-units would be most commonly used, and which would profit the most from having hardware / instruction set support. Apparently byte addressability was considered important, but (AIUI) half-word and word access exist, too. Commented Jul 7, 2020 at 21:56
  • They also previously had a machine with decimal-digit addressing: the 1620. Before the 360 there weren't "lines" of computers, at least at IBM. There were only computers addressing various markets and "sizes": e.g., large scientific (7090), small scientific (1620), small business (1401). The 360 was IBM's attempt to make a broad architecture that could support a family of implementations of various sizes and for various markets. So all decisions were made anew, looking to the future (though informed by their previous experiences). (See Brooks's The Mythical Man-Month for more on this.)
    – davidbak
    Commented Jul 7, 2020 at 22:40

3 Answers


Programmers do need to process characters from memory, and characters are generally smaller than the machine word. The architectural possibilities seem to be:

  1. Instructions that only read/write words. The programmer must use explicit shift, mask, and logical-OR operations to work on characters (a small sketch of this appears after the list).

  2. There are special instructions that use a "character address" that augments the memory address with details of which character is needed. (The CPU transfers whole words and has internal logic to extract/insert bytes.)

  3. The character address format is used for all instructions: this is "byte addressable memory". (It does not necessarily follow that the memory itself can transfer on an arbitrary byte boundary; CPU-memory traffic may still be word-oriented).
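
To make option 1 concrete, here is a minimal C sketch (my own illustration, not S/360 code) of what the programmer has to do by hand on a word-only machine: fetch the whole word, then shift and mask to extract one character, or mask and OR to store one.

    #include <stdint.h>

    /* Illustration only: a word-addressed memory of 32-bit words, each holding
       four 8-bit characters, character 0 in the most significant byte. */

    /* Extracting a character takes an explicit shift and mask... */
    uint8_t load_char(const uint32_t *mem, uint32_t word_addr, unsigned char_in_word)
    {
        unsigned shift = (3 - char_in_word) * 8;      /* position within the word */
        return (uint8_t)((mem[word_addr] >> shift) & 0xFF);
    }

    /* ...and storing one takes a mask and a logical-OR (read-modify-write). */
    void store_char(uint32_t *mem, uint32_t word_addr, unsigned char_in_word, uint8_t c)
    {
        unsigned shift = (3 - char_in_word) * 8;
        uint32_t mask  = (uint32_t)0xFF << shift;
        mem[word_addr] = (mem[word_addr] & ~mask) | ((uint32_t)c << shift);
    }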

Consider a machine with 32-bit words and 8-bit characters. You can specify 'which character' as a 2-bit integer. To operate on an (address-of-word, character-in-word) pair, it's convenient to be able to pack them into a 32-bit value: a 30-bit address and a 2-bit character number.

If you put the character number at the low end of the 32-bit value, 30-bit address at the high end, and use that form of address in all instructions, you've just invented byte-addressed memory.
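
As a rough sketch of that packing (plain C, just to illustrate the arithmetic; the 30-bit/2-bit split is the one described above):

    #include <stdint.h>

    /* Pack (address-of-word, character-in-word) into one 32-bit "byte address":
       30-bit word address in the high bits, 2-bit character number in the low bits. */
    static inline uint32_t byte_addr(uint32_t word_addr, unsigned char_in_word)
    {
        return (word_addr << 2) | (char_in_word & 0x3);
    }

    /* Taking it apart again is just as cheap. */
    static inline uint32_t word_part(uint32_t addr) { return addr >> 2; }
    static inline unsigned char_part(uint32_t addr) { return addr & 0x3; }

Consecutive characters then get consecutive addresses, and a word address is simply a byte address whose two low bits are zero.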

In this "just-so story", it looks like an obvious progression. Mind you, I say that with hindsight (though hindsight that comes from having programmed all 3 types of machines).


Footnote: the ICL 1900 had a 'character modifier', but the 2-bit character selector was at the high end of the word rather than the low end. Ah, so close. Though the 24-bit 1900 was impoverished for address bits, so it could not have afforded general character addressing anyway.

  • Thank you! So, in short, it seems that they needed byte addressing just to process 8-bit EBCDIC symbols effectively, right? And if symbols were, let's say, 9 bits, they would have created RAM with 9-bit addressing, right?
    – No Name QA
    Commented Jul 8, 2020 at 4:00
  • Let's be clear. This is not "RAM addressing" as in anything a memory unit necessarily sees. It's the address structure of the instruction set implemented by the CPU. The actual memory hardware may, for example, never see the 2 low bits of the address. But apart from that quibble: yes, the granularity of addressing matches, by design, the unit size of information to be processed. The S/360 designers discussed 6-bit versus 8-bit character sizes (discussed elsewhere on this site), and if 6 bits had won, I suppose we might have had memory addressable in units of 6 bits.
    – dave
    Commented Jul 8, 2020 at 11:55

The 360 was designed as an all-purpose system. That means that, among other things, it should be suitable for processing text.

Nowadays, a computer for processing text could perfectly well go with a 32-bit addressing unit and take the attitude that you always use Unicode, but in those days Unicode didn't exist and it would have been too expensive to use 32 bits per character. You want the smallest addressing unit to be reasonably efficient for storing characters.

6 bits means uppercase only, and they were probably already thinking about word processing, which really wants lowercase.

7 bits would be a perfect match for ASCII, but engineers would instinctively recoil from basing the word sizes on a prime number.

8, 9, 10 all work well. Of those, 8 has the lowest overhead per character.

  • "8, 9, 10 all work well." Plenty of 9-bit and 36-bit uses (often in the same machine). Haven't seen too many 10s. Commented Jul 7, 2020 at 22:33
  • I believe 8 bits were used because you need that many to represent all the characters in the EBCDIC character set the machine used. It had something to do with the layout of Hollerith punched cards, if I remember correctly. Commented Jul 7, 2020 at 23:56
  • @PaulHumphreys Very good point! Quick check confirmed: EBCDIC does indeed use all eight bits (with large chunks of interspersed unused code space).
    – rwallace
    Commented Jul 8, 2020 at 0:31
  • I think it wasn't that 7 is a prime number, but that 8 is much better for handling BCD numbers. 8 bits can hold two decimal digits as aaaabbbb; 7 bits can hold 0..99, but only as a binary number, which would be harder to process (a small sketch follows these comments). Commented Jul 8, 2020 at 13:58
  • Aren't powers of 2 preferred in computing? Commented Jul 10, 2020 at 16:47
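
A small C sketch of the BCD point made in the comment above (my own illustration, not period code): with 8 bits, two decimal digits live in two independent 4-bit fields, so each is recovered with a shift or a mask, whereas a 7-bit binary value 0..99 needs division to get its digits back.

    #include <stdint.h>

    /* Packed BCD: two decimal digits per byte, laid out as aaaabbbb. */
    uint8_t  pack_bcd(unsigned tens, unsigned units) { return (uint8_t)((tens << 4) | units); }
    unsigned tens_digit(uint8_t bcd)  { return bcd >> 4; }    /* just a shift */
    unsigned units_digit(uint8_t bcd) { return bcd & 0x0F; }  /* just a mask  */

    /* A 7-bit binary value 0..99 needs division/modulo instead. */
    unsigned tens_binary(uint8_t v)  { return v / 10; }
    unsigned units_binary(uint8_t v) { return v % 10; }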

According to Wikipedia, the IBM System/360 had byte-addressable RAM.

Yes.

[Considering this and the title "Why did IBM System 360 have byte addressable RAM", it feels as if there's a mix-up about what addressing and RAM mean. See some thoughts about that at the end.]

Previously, IBM had machines with word-addressable memory.

No. Only very few.

IBM made all sorts of machines, including bit-addressable ones, as explained here and here.

In detail, the most-used machines were:

  • 1401 used byte addressing - at the time called character addressing - with 6-bit bytes
  • 1620 used decimal addressing with decimal bytes (one digit per byte)
  • 1710 - see 1620
  • 7030 used bit addressing

Speaking of the 7030: a 700/7000 family is often assumed, but in reality it's more of a marketing thing, where IBM tried to press all CPUs into a 70xx numbering scheme; in terms of hardware as well as software they were vastly different lines:

  • 701 - half-word addressing (19 units built)
  • 702 - character addressing (14 units built)
  • 704 - word addressing
  • 709/704x/709x - like 704
  • 705/7080 - character addressing
  • 7010 - character addressing (top end 1400)
  • 7030 - bit addressing
  • 707x - decimal words of 10 digits (like the 650 calculator)

So of all of these, only the 704x/709x CPUs used word addressing. And while these include some of the most powerful (well, already outclassed by CDC before the /360 came) and most expensive machines, their numbers were quite low (*1).

Bottom Line: Most pre-/360 machines were byte addressable (with bytes of various sizes), not word addressable.

Did they make the switch for comparability between different machines?

Why should they? I know of no reason. Comparability is an external request, nothing a producer needs or wants. Marketing loves to sell things that are not so easily comparable :)

As explained here, the /360 was the follow-up to all of these different machines - with only a few of them being word addressable. See above.

Or was it just performance, money, or single-symbol size that was the reasoning behind it?

Pick whatever you want. The /360 was intended to be a single ISA capable of being tailored to all needs, from low-end business to high-end scientific.


Now the promised thoughts:

Could it be that your thoughts are stuck between addressability as defined in the ISA (Instruction Set Architecture), seen from a programmer's view, and the memory interface as seen from the hardware?

An ISA is the abstract view of the hardware a programmer interacts with; it's the way the machine looks to them. Addressing on the ISA side describes the granularity an instruction can use to address data. While this may vary between instructions and access types (for example, due to alignment restrictions), the smallest size that can be addressed directly with a complete address is considered the one defining its capabilities. In the case of the /360, that's the byte. Any regular address within an instruction can point to any byte in memory.

Words and the like are formed from multiples of bytes and may or may not be restricted to a subset of addresses - like the /360 requiring words to be aligned to multiples of 4, thus leaving the two lowest bits of any word address zero.
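
A minimal sketch of what that alignment rule means in terms of address bits (C, illustrative only):

    #include <stdint.h>
    #include <stdbool.h>

    /* With 8-bit bytes and 4-byte words, a valid /360-style word address
       is a byte address whose two lowest bits are zero. */
    bool is_word_aligned(uint32_t byte_addr)
    {
        return (byte_addr & 0x3) == 0;
    }

    /* The word containing any given byte starts at that byte's address
       with the two low bits cleared. */
    uint32_t word_base(uint32_t byte_addr)
    {
        return byte_addr & ~(uint32_t)0x3;
    }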

This definition is only valid within its ISA and is not necessarily related to the hardware at all.

On the hardware side, memory is always word-accessed, with that word being of arbitrary size, independent of the word (or byte) size defined by the ISA. The /360 is a great example here, as its ISA presents a plain 32-bit world with 24-bit addressing and 8-bit bytes. But at the memory interface many widths were used, depending on machine type and time, starting from 16 and 32 bits for the earliest implementations up to 64, 128, 256 bits and more later on.

It's the task of the memory interface to map the bytes, words, or whatever else the ISA side requests onto its own memory words and back.
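
As a rough sketch of that mapping (my own C illustration with a hypothetical 64-bit-wide store; actual /360 models differed in width and detail), the interface simply splits the ISA byte address into a memory-word index and a byte offset:

    #include <stdint.h>

    /* Hypothetical memory interface: the ISA sees 8-bit bytes behind 24-bit
       addresses, while the physical store is organised as 64-bit (8-byte) words. */
    #define MEM_WORDS (1u << 21)              /* 2^24 bytes / 8 bytes per store word */
    static uint64_t store[MEM_WORDS];

    uint8_t read_byte(uint32_t byte_addr)
    {
        uint32_t index  = byte_addr >> 3;     /* which 64-bit store word        */
        unsigned offset = byte_addr & 0x7;    /* which byte within that word    */
        unsigned shift  = (7 - offset) * 8;   /* byte 0 = most significant byte */
        return (uint8_t)((store[index] >> shift) & 0xFF);
    }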

This abstraction level was already in use before the /360: for example (AFAIR), a character-addressing 7010, a word-addressing 7090, and a bit-addressing 7030 could all use the same memory subsystem built of 36/72-bit words.


*1 A few hundred for all of them combined, while the 1401 alone accounts for more than 10,000 units.

