
According to wiki, the IBM System/360 had only 32- and 64-bit registers for data.

I'm wondering: if they used an 8-bit symbol, does that mean they stored it in a 32-bit register?

Did they get any performance improvement from that decision? If so, why do we have 8-bit registers today?

Moreover, do we get any performance improvement if we store an ASCII symbol in an 8-bit register vs a 32-bit register?

  • Not a full answer, as I know little about the /360, but the reason we have 8-bit registers in x86 processors is mostly backward compatibility: they were designed so that programs written for the 8-bit 8080 processor could be automatically translated to run on them.
    – occipita
    Commented Jul 7, 2020 at 10:55
  • Side note: ASCII is long obsolete, except in a retrocomputing forum. If you want byte-oriented character codes, use UTF-8.
    – dave
    Commented Jul 7, 2020 at 11:25
  • @another-dave: UTF-8 may be better than ASCII for many desktop applications, but is grossly impractical for many embedded purposes.
    – supercat
    Commented Jul 7, 2020 at 17:32
  • It's not a matter of better or worse these days; it's a matter of the character code actually used by programming platforms. For example, if you think Java characters are ASCII, you're likely to write buggy code.
    – dave
    Commented Jul 7, 2020 at 18:18
  • ARM also has byte-addressable RAM and 32-bit registers. Commented Jul 8, 2020 at 8:49

3 Answers


Why did the IBM System/360 have byte-addressable RAM, but no 8-bit registers?

These issues are unrelated.

Registers are about addressing, so they need to hold at least an address word. Byte-addressable RAM, in turn, is needed to handle byte-wide data - most notably characters and strings. There is no inherent need for a CPU to have byte-sized registers as well.

According to wiki IBM System/360 had only 32 and 64-bit registers for data.

True.

(Except that the /360 had only one register size, 32 bit; 64 bit meant combining two registers for a few (essentially two) instructions.)

I'm wondering: if they used an 8-bit symbol, does that mean they stored it in a 32-bit register?

Well, yes, getting an 8-bit value into a register was a pain in the ass (*1).

Did they get any performance improvement from that decision?

No, as one rarely had to load/store an 8-bit value to/from a register. Registers on a /360 were meant mainly for address handling/calculation plus integer arithmetic, which is essentially the same thing. All register-to-register instructions (like AR for adding two registers) are 32 bit only. Memory structures that interact with registers are words (32 bit) and halfwords (16 bit), the latter always extended to 32 bit when fetched (usually sign-extended). This yields, for example, two add instructions: A to add a word and AH to add a (sign-extended) halfword.
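That halfword sign extension can be sketched in a few lines. This is just an illustrative model (the function name and the Python itself are invented; the real thing is done in hardware by LH/AH), assuming big-endian storage as on the /360:

```python
def load_halfword(mem, addr):
    """Model of a /360 halfword operand fetch: the 16-bit value is
    sign-extended to 32 bits before it reaches the register."""
    hw = (mem[addr] << 8) | mem[addr + 1]  # big-endian, as on the /360
    if hw & 0x8000:                        # sign bit set -> extend with ones
        hw |= 0xFFFF0000
    return hw

mem = bytes([0xFF, 0xFE,   # halfword -2
             0x00, 0x2A])  # halfword 42
print(hex(load_halfword(mem, 0)))  # 0xfffffffe: -2 as 32-bit two's complement
print(load_halfword(mem, 2))       # 42
```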

Registers were (usually) never used to handle character data, as the /360 wasn't a strict accumulator(-set) machine, but also a memory-to-memory architecture (*2). A string would be transferred with a single MVC dest,src instruction (*3). There was no need to run a loop fetching and transferring single bytes. Similarly, a string compare was done by a CLC. This included logic operations as well: two strings could be ANDed, ORed or XORed. Nifty, isn't it?
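What those memory-to-memory instructions do can be modelled roughly like this (a Python sketch for illustration only - the function names mirror the mnemonics but are invented here; the real instructions take base-displacement operands and an encoded length):

```python
def mvc(mem, dest, src, length):
    """Rough model of MVC dest(length),src: a byte-wise, left-to-right
    memory-to-memory copy - no register loop needed."""
    for i in range(length):
        mem[dest + i] = mem[src + i]

def xc(mem, dest, src, length):
    """Rough model of XC: XOR the source field into the destination.
    XC field,field (XORing a field with itself) is the classic /360
    idiom for clearing memory."""
    for i in range(length):
        mem[dest + i] ^= mem[src + i]

buf = bytearray(b'........HELLO...')
mvc(buf, 0, 8, 5)                  # one "instruction" moves the string
print(bytes(buf[0:5]))             # b'HELLO'
xc(buf, 0, 0, 5)                   # clear the field by XORing with itself
print(bytes(buf[0:5]))             # b'\x00\x00\x00\x00\x00'
```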

It can to some degree be compared with the x86's string instructions, just without the cumbersome setup.

In fact, this approach impacts performance in a positive way. String instructions formulate a task on a higher level than byte access, allowing the hardware to handle these strings in bigger chunks than single bytes. After all, even back then the memory interface was not only independent of word size, but also much wider. Where low-end machines used a 16-bit memory interface, high-end machines already had up to 256-bit-wide memory access in the 1970s (*4). That's 32 bytes per fetch, much like modern GPUs, isn't it? To transfer a 20-byte string, only 2 to 4 fetches had to be made, which is way better than doing 40, isn't it (*5)?

All this came as a natural benefit of string-oriented instructions, speeding up operations way before caches and the like were added. Intel's x86 (and many modern ISAs) never had that advantage and needed to invest way more into fetch/write bundling strategies and, above all, multi-level caches.

If so, why do we have 8-bits registers today?

I assume you're talking about Intel x86, right?

Sorry to disappoint you, but there are none. The registers are 64 bit, and the so-called 8-bit registers are only aliases addressing part of one. These aliases are still needed to handle any kind of byte stream - like this web page.

In some way they can be seen as a leftover from history. Keep in mind that Assembler is an abstract way to look at/describe a CPU's working; in some cases it represents how the underlying hardware works, but more often it does not. Of course, one could simply make up a 'new' Assembler syntax (*6) eliminating them - maybe like Motorola's size modifiers (.b and the like). It's always good not to confuse the logical model presented by books and Assemblers with the implementation.
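The alias idea can be sketched with a toy model (the class and method names are invented for illustration; only the aliasing behaviour mimics x86-64):

```python
class GPR:
    """Toy model of one x86-64 general-purpose register: AL and EAX are
    just views onto the low bits of a single 64-bit value, not separate
    hardware registers."""
    def __init__(self):
        self.value = 0

    def set_al(self, b):
        # An 8-bit write touches only the low byte; bits 63..8 survive.
        self.value = (self.value & ~0xFF) | (b & 0xFF)

    def al(self):
        return self.value & 0xFF

    def set_eax(self, d):
        # On x86-64, writing a 32-bit register zeroes bits 63..32.
        self.value = d & 0xFFFFFFFF

r = GPR()
r.set_eax(0x11223344)
r.set_al(0x99)               # only the low byte changes
print(hex(r.value))          # 0x11223399
```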

Moreover, do we get any performance improvement if we store an ASCII symbol in an 8-bit register vs a 32-bit register?

As said, there are none: the 8-bit values always get promoted into the 32 (64) bit registers. But there is an advantage not visible from the ISA/Assembler level: designers may give certain instructions a shorter encoding. For example, adding an immediate to a register takes 2 opcode bytes plus the value, while doing so with AL/AX takes only 1 byte plus the value.

Up to the 80286 there were only two basic data types, 8 and 16 bit. When the 386 was designed, another basic data type for 32 bit had to be introduced. But there was no way to squeeze it into the encoding, except by adding a prefix byte for it. Not really cool, as it would bloat 32-bit code, with next to all register instructions needing another byte of encoding. So they settled on a mode where all previous 16-bit encodings now meant 32 bit (*7), while keeping 8 bit as a first-class member for character handling (*8).

So there is the advantage: in addition to the lower amount of data to be moved, the instructions doing so are shorter as well, resulting in less code fetch and thus higher throughput. Which of course still does not make up for the penalty of not having simple-to-use high-level string instructions.


*1 - Ok, not really, but it (usually) took two instructions for clearing the register and inserting the character: XR Rx,Rx and IC Rx,addr

*2 - The /360 is essentially a unification of the three prior CPU families (1401, 1620 and 7090) with different structures (decimal character, decimal numeric and FP/integer word-oriented), yielding a machine which is less a one-size-fits-all than a toolset covering all. One could see it as unifying five ISA approaches:

  • Integer
  • Floating point
  • Decimal (character-oriented)
  • Decimal BCD
  • String processing

*3 - Well, up to 256 bytes, that is. For more than 256, an MVCL had to be set up, able to handle up to 16 Megabytes in a single instruction - including a fill-up mechanism :)

*4 - Today one might call it the memory bus, except it wasn't a bus, but the interface between memory and CPU, handled by a dedicated unit on each side.

*5 - Due to the way the microprogram was structured, the average number of fetches was only slightly above 2, close to optimum.

*6 - Like NEC did for their V-series 8086-compatible CPUs. Here the registers were named AW..DW instead of AX..DX, IX/IY instead of SI/DI, DS0/DS1 instead of DS/ES.

*7 - It's WAY more complicated than that, but that's material for a question of its own (at least one).

*8 - No one back then would have thought we would settle so soon for UCS2 or UCS4.

  • Often, clearing the other 24 bits of the register was unnecessary, even when you wanted to do arithmetic on character data, if you only cared about 8 bits of the final result. And using the "translate" and "translate and test" instructions, you could often operate on character data without doing explicit arithmetic or using AND/OR type logic to twiddle individual bits.
    – alephzero
    Commented Jul 7, 2020 at 13:37
  • @alephzero :) sure, these are the detailed ways to handle it, still not changing the fact that registers weren't really meant for character handling at all.
    – Raffzahn
    Commented Jul 7, 2020 at 13:45
  • All of the above string instructions plus don't forget TR and TRT - super powerful, with applications beyond the obvious.
    – davidbak
    Commented Jul 7, 2020 at 14:38
    @davidbak and CLCL and MVO/N (yes, it can be used beside BCD) and so on. TR(T) is an extremely versatile beast when combined. Want to test a string for certain characters? Make up a table with an entry for each character, with bits for attributes (like alpha, case, num, punctuation, whitespace), and translate a copy using a single TR String,Tab. Then OR it together using a single OC String+1(len-1),String and the last byte will contain the combined attributes. Use NC alike. And that's only the start. Building parsers can be done in a few machine instructions. Who needs so-called HLLs?
    – Raffzahn
    Commented Jul 7, 2020 at 14:49
    The 8088's register set allowed assembly-language programmers to use six general-purpose 16-bit registers, or five 16-bit and two 8-bit, or four 16-bit and four 8-bit, or three 16-bit and six 8-bit, or two 16-bit and eight 8-bit registers. This was very handy when writing assembly-language code for tasks which needed more than six registers but didn't need them all to be capable of holding more than eight bits, but compilers for high-level languages didn't really attempt to exploit this.
    – supercat
    Commented Jul 7, 2020 at 17:17

There is no downside I can see to storing an 8-bit quantity in a 32-bit register if you already have the 32-bit register.

  • Load/store takes the same amount of time. Memory transfers are at least word-sized (maybe larger) anyway.
  • Arithmetic in general is not faster on smaller binary numbers.
  • You'd need more opcodes or addressing modes, potentially consuming more bits in instructions.
  • It would add hardware for no compensating improvement.

As mentioned elsewhere, some microprocessor architectures have "8 bit registers" for strict compatibility with their predecessors.


System/360 had a specific design goal of character addressability:

  1. The general addressing system would have to be able to refer to small units of bits, preferably the unit used for characters

(Architecture of the IBM System/360, by Brooks, Blaauw, and Amdahl)

That explains the presence of byte addressing. For the reasons already discussed in the various answers here, that does not imply the need for byte registers.


Did they get any performance improvement from that decision? If so, why do we have 8-bit registers today?

Most 32-bit non-x86 CPU types today still only have 32-bit registers but they can access memory byte-wise. Examples are ARM, MIPS, PowerPC, Sparc, TriCore, RH850, SH CPUs and there are a lot more.

So your observation is not something specific to the S/360, but something typical for a lot of CPU types.

I'm wondering: if they used an 8-bit symbol, does that mean they stored it in a 32-bit register?

I don't know about the S/360, but on modern CPUs having only 32-bit registers, you would load the byte into the low 8 bits and set the upper 24 bits to zero if the value is an unsigned byte. If it is a signed byte, you typically sign-extend the low 8 bits to 32 bits.
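The two load flavours can be sketched like this (illustrative Python, not real machine code; the function names are invented - on ARM the corresponding instructions are LDRB/LDRSB, on MIPS LBU/LB):

```python
def load_byte_unsigned(b):
    """Zero-extend an 8-bit value: the upper 24 bits are cleared."""
    return b & 0xFF

def load_byte_signed(b):
    """Sign-extend an 8-bit value: bit 7 is replicated into bits 31..8,
    so the 32-bit result has the same signed value as the byte."""
    v = b & 0xFF
    return v - 0x100 if v & 0x80 else v

print(load_byte_unsigned(0xFE))  # 254
print(load_byte_signed(0xFE))    # -2
print(load_byte_signed(0x7F))    # 127
```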

Moreover, do we get any performance improvement if we store an ASCII symbol in an 8-bit register vs a 32-bit register?

Yes and no:

On an x86 CPU, we can use two different 8-bit registers (for example AH and AL) instead of a single 32-bit register (EAX).

Having more registers available allows more efficient programs because fewer memory accesses are needed.

On a m68k CPU, you can either operate on the low 8 bits of a data register or on the full 32-bit register, so you don't get the advantage of having twice the number of registers.

However, the early m68k CPUs (for example the 68000 and 68008) required more time for 32-bit operations than for 8- and 16-bit operations, so using the 8-bit operations was faster.

For modern 32-bit CPUs this is no longer the case. And I think it was also not the case for the S/360.

For this reason, using an 8-bit register instead of a 32-bit register brings no performance benefit unless you can use more 8-bit registers instead of a single 32-bit one (e.g. AL and AH instead of EAX).
