
The IBM 7030's fixed-point arithmetic model was unusual: binary numbers could have any number of bits from 1 to 64. Similarly, PL/I's FIXED BINARY data type lets the programmer specify the number of bits. On the other hand, the first PL/I implementation was on System/360, with its now-familiar byte/halfword/word/doubleword arithmetic. PL/I's arithmetic is unnatural on such an architecture.
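
For concreteness, a declaration just states the precision it needs - a sketch (variable names purely illustrative), which an S/360 compiler still has to map onto halfwords and fullwords anyway:

    DECLARE SMALL FIXED BINARY(3);   /* only 3 binary digits requested      */
    DECLARE COUNT FIXED BINARY(15);  /* 15 digits + sign: an S/360 halfword */
    DECLARE TOTAL FIXED BINARY(31);  /* 31 digits + sign: an S/360 fullword */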

So, why did IBM choose that model? When PL/I was formulated, the 7030 was IBM's most advanced machine. Did PL/I's designers see the 7030 model as the future, despite the 7030's failure as a product?

  •
    Keep in mind that when PL/1 was designed, there were all sorts of exotic bit lengths around.
    – tofro
    Commented Jun 18, 2021 at 13:23
  •
    @Raffzahn - I've never heard the 360's float format described as "hex float" - it's always been described as a binary format but with a base-16 exponent.
    – davidbak
    Commented Jun 18, 2021 at 14:41
  •
    @Raffzahn "why should a decimal loop counter be converted to binary over and over - or at all?" Well, I was being a bit careless, assuming that "every programmer knows" loop counters are often used as array subscripts. And AFAIK the S/360 instruction set only used binary data for addressing memory.
    – alephzero
    Commented Jun 18, 2021 at 15:18
  •
    @Raffzahn You still haven't answered the simple question "I give you FIXED BIN X;, what is the type of X/3?" Easy to answer for C: int x; x/3 has type int.
    – John Doty
    Commented Jun 18, 2021 at 15:25
  •
    @Raffzahn I take it you only have theoretical knowledge of PL/1. And you don't seem to realize the difference between fixed decimal/binary variables and fixed decimal/binary constants. A constant like "1" is a fixed decimal constant, but was optimized into the fixed binary constant "1B" by the compiler. AFAIK the S/360 instruction set has no instructions that use decimal constants for address calculations.
    – alephzero
    Commented Jun 18, 2021 at 15:27

1 Answer


TL;DR:

The IBM 7030's fixed-point arithmetic model was unusual: binary numbers could have any number of bits from 1 to 64. Similarly, PL/I's FIXED BINARY data type lets the programmer specify the number of bits.

Coincidence. PL/I simply uses an abstract, machine-independent way to define the entities it handles - as any good HLL should.


System/360, with its now-familiar byte/halfword/word/doubleword arithmetic. PL/I's arithmetic is unnatural on such an architecture.

I do not really see the point in calling this 'unnatural'. PL/I is supposed to be a high-level language, usable on various architectures. So why should it add machine-specific data types - possibly several of the same kind (like INT2, INT3, INT4) - when it can define the needed precision in an abstract way?

C did go the 'simple' way of using machine types, leading to a plethora of data types with overlapping meanings, unclear implementations, and lots of pitfalls when porting programs. I still get sick just thinking about some of the header files I had to read over the years trying to cope with this mess.

PL/I's mechanics instead allow a clear definition of what a programmer wants in a value. It is a simple, machine-independent structure of:

  • Basic representation: BINARY / DECIMAL
  • Sign handling: SIGNED / UNSIGNED
  • Scaling: FIXED / FLOAT
  • Mode: REAL / COMPLEX
  • Precision: number of digits

The result is a machine-independent definition that allows porting programs between vastly different architectures without the need to rewrite anything - seems great for an HLL, doesn't it?
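
For illustration, a declaration spelling these attributes out in full might look like the sketch below (variable names invented; in practice the language's defaults supply most of the attributes):

    DECLARE PRICE FIXED DECIMAL(7,2) REAL;    /* 7 decimal digits, 2 after the point */
    DECLARE COUNT FIXED BINARY(17)   REAL;    /* at least 17 binary digits plus sign */
    DECLARE WAVE  FLOAT BINARY(21)   COMPLEX; /* complex, 21 binary digits requested */

The compiler is then free to pick whatever machine representation satisfies the requested precision - a halfword or fullword here, a packed decimal field or a short float there.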

  •
    In practice, quite difficult to use. Things like being careful to never let arithmetic promote precision beyond what fit in a register. I recall writing a lot of fixed bin(17) declarations to avoid expensive, extraneous code. Telling learners "never, ever, use / to divide fixed point, always use divide". All kinds of gotchas. C was such a breath of fresh air after that: just give me data types that behave simply, efficiently, and predictably.
    – John Doty
    Commented Jun 18, 2021 at 12:55
  •
    C was absolutely not bad at portability compared to PL/I. When UNIX first came out, it was rapidly ported to a variety of architectures with little difficulty (compare to Multics). But I never saw a PL/I program that was practical to run on both GE/Honeywell and IBM hardware. You had to tune your code to the specific machine in PL/I, but C doesn't need that except in relatively rare corner cases.
    – John Doty
    Commented Jun 18, 2021 at 13:06
  •
    Then why did all the practically portable C programs out of Bell Labs and Berkeley dominate the space that PL/I was targeted for?
    – John Doty
    Commented Jun 18, 2021 at 13:40
  •
    I would argue that C's type system is much clearer than PL/I's. Quick, without consulting a manual, if I give you FIXED BIN X;, what is the type of X/3?
    – John Doty
    Commented Jun 18, 2021 at 13:42
  •
    "The result is a machine independent definition that allows portage of programs between vastly different architectures without the need to rewrite anything" Cue hollow laughter from anyone who actually tried to do that for non-trivial application code. It simply didn't work. We tried, more that once, with several different target machines and compilers. The success rate was exactly zero (in binary or decimal).
    – alephzero
    Commented Jun 18, 2021 at 15:35
