
I'm currently reading about the IBM System/360 architecture and there's a part that has me very confused:

The decision on basic format (which affected character size, word size, instruction field, number of index registers, input-output implementation, instruction set layout, storage capacity, character code, etc.) was whether data length modules should go as 2^n or 3.2^n.

Why would a data length module be 3.2^n?

  • What is a "data length module?" I think I know what it is from context, but I've never heard this term before and wonder if it's an S/360-specific term, or a typo. Commented Apr 10, 2020 at 21:14
  • @WayneConrad I'm in the same boat as you. I hope I will understand better as I keep reading the paper.
    – KetDog
    Commented Apr 10, 2020 at 21:22
  • Can you point us to what you're actually reading? Commented Apr 10, 2020 at 22:33
  • 1
    The extract is from the 1964 paper Architecture of the IBM System/360 by Amdahl, Blaauw, and Brooks.
    – dave
    Commented Apr 10, 2020 at 23:11

2 Answers


Note that 3.2 is the square root of 10 rounded up to the closest value with one digit after the decimal.

Thus, every other data length module would be slightly greater than a power of 10. Apparently there was an expectation that data block lengths would typically be close to powers of 10, achieving good capacity utilization, while still providing acceptable intermediate sizes for other block lengths.

The values of 3.2^n, rounded to the nearest integer, are 3, 10, 33, 105, 336, 1074, ...
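
As a quick check (my own sketch, not part of the original answer), the sequence and its relationship to powers of 10 can be reproduced in a few lines of Python:

    # Rounded values of 3.2**n for n = 1..6, as listed above.
    values = [round(3.2 ** n) for n in range(1, 7)]
    print(values)  # [3, 10, 33, 105, 336, 1074]

    # Every second value sits just above a power of 10:
    print([v / 10 ** k for k, v in zip((1, 2, 3), values[1::2])])  # [1.0, 1.05, 1.074]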

  • It might be useful to add that decimal-based computers were still a thing in 1960. This meant not only decimal arithmetic instead of binary, but decimal addressing as well as block sizes of 10/100/1000. So making the /360 strictly binary (with BCD just an optional user data type) in all details (4/8/16/32-bit units, 12/24-bit addressing, 2048/4096-byte data blocks, etc.) was quite revolutionary - nearly all earlier machines were a mix of elements with different bases.
    – Raffzahn
    Commented Apr 10, 2020 at 8:18
  • 1
    @Raffzahn - correct, and in that timeframe even the size of "blocks" on the disk wasn't defined as a power of 2 - in fact, disks weren't even divided into "blocks". You could write records of any length at all (as long as it would fit in a "track"). So there was quite a bit of flexibility available to the designers.
    – davidbak
    Commented Apr 10, 2020 at 15:06
  • 1
    (Oddly enough, w.r.t. to disks, we now have our feet firmly planted on both sides of the divide: we read/write in chunks of powers of 2 but we purchase in chunks of powers of 10!)
    – davidbak
    Commented Apr 10, 2020 at 15:11
  • 1
    If the explanation for that section was that data block sizes were expected to be close to a power of 10, wouldn't the author have characterized the decision as 2^n versus 10^n?
    – dave
    Commented Apr 11, 2020 at 16:53
  • 1
    I think 3.2^n should be read 3 times 2 to the power n, making this answer irrelevant. Commented Apr 14, 2020 at 7:54

The notation 3.2^n looks to me like it means 3 x 2^n rather than (3 point 2)^n.

So the question is whether data lengths should be based on a 6-bit unit or some 'binary' size, in practice 8 bits.

The dominant character size at the time was 6 bits (with 7-bit ASCII just emerging). A 36-bit word length was also common, and was in fact the word size for the successful 709/7090/7094 family.

There were definitely arguments made for keeping a 6-bit size to 'save storage'.

I do not have an authoritative reference to hand, but Wikipedia notes that one influential S/360 feature was:

The 8-bit byte (against financial pressure during development to reduce the byte to 4 or 6 bits), rather than adopting the 7030 concept of accessing bytes of variable size at arbitrary bit addresses.


Aha, per Fred Brooks: see page 25

There was one very big difference, and that is Gene’s machine was based on the six-bit byte and multiples of that so 24-bit instructions, 48-bit floating part and Jerry’s machine was based on a 8-bit byte and 32-bit instructions, 64-bit, and 32-bit floating point, which is not a real happy choice, but… there are strong arguments each way. And you want your architects to be consistent. You’re not going to have an 8-bit byte and 48-bit instruction floating point word. And, so, then came the biggest internal fight, and that was between the six and eight bit byte, and that story has been told. Gene and I each quit once that week, quit the company, and Manny

(Gene is Amdahl, Jerry is Blaauw)

24 is 3 x 2^3, 48 is 3 x 2^4.
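
To make the two alternatives concrete, here is a small sketch of my own (not from the paper or the interview): data lengths of the form 3 x 2^n give the 6-bit-character family used in Amdahl's proposal, while plain 2^n gives the 8-bit-byte family that System/360 adopted.

    # Illustration: the two families of data lengths under debate.
    six_bit_family = [3 * 2 ** n for n in range(1, 5)]  # 3 x 2^n -> [6, 12, 24, 48] bits
    binary_family = [2 ** n for n in range(3, 7)]       # 2^n     -> [8, 16, 32, 64] bits
    print("3 x 2^n lengths:", six_bit_family)
    print("2^n lengths:    ", binary_family)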


Furthermore:

The extract in the question comes from a 1964 IBM Journal paper Architecture of the IBM System/360, by Amdahl, Blaauw, and Brooks. The quoted text is on page 91 of the journal, or 4 or 5 pages into the paper.

The extract is immediately followed by a discussion of character sizes (6 versus 8 bits) and floating-point operand sizes (48 versus 32/64 bits). This makes it clear that 3.2^n is to be understood as 3 x 2^n.

Maybe we can blame the compositor for setting 3.2^n rather than 3·2^n.
