TL;DR:

The IBM 7030's fixed-point arithmetic model was unusual: binary numbers could have any number of bits from 1-64. Similarly, PL/I's FIXED BINARY data type has a variable number of bits.

Coincidence. PL/I simply uses an abstract, non-machine-specific way to define the entities it handles - as any good HLL should.


System/360, with its now-familiar byte/halfword/word/doubleword arithmetic. PL/I's arithmetic is unnatural on such an architecture.

I do not really see the point of calling this 'unnatural'. PL/I is supposed to be a high-level language, usable on various architectures. So why should it add machine-specific data types - possibly several of the same kind (like INT2, INT3, INT4) - when it can define the needed precision in an abstract way?

C did go the 'simple' way of using machine types, leading to a plethora of data types with overlapping meanings, unclear implementations and lots of pitfalls when porting programs. I still get sick just thinking about some of the header files I have had to read over the years trying to cope with this mess.

PL/I's mechanics instead allow a clear definition of what a programmer wants in a value. It's a clear and machine-independent structure of:

  • Basic representation: BINARY / DECIMAL
  • Sign handling: SIGNED / UNSIGNED
  • Scaling: FIXED / FLOAT
  • Mode: REAL / COMPLEX
  • Precision as number of digits
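
As a quick illustration - a sketch of my own with made-up names, not tied to any particular compiler - these attributes are simply combined in a declaration:

    DECLARE COUNTER FIXED BINARY (15);         /* integer, at least 15 binary digits      */
    DECLARE TOTAL   FIXED BINARY (31);         /* integer, at least 31 binary digits      */
    DECLARE PRICE   FIXED DECIMAL (7,2);       /* 7 decimal digits, 2 of them fractional  */
    DECLARE RATE    FLOAT DECIMAL (6);         /* about 6 significant decimal digits      */
    DECLARE FLAGS   UNSIGNED FIXED BINARY (8); /* UNSIGNED where the compiler supports it */
    DECLARE WAVE    COMPLEX FLOAT BINARY (21); /* complex value, 21 binary digits         */

The compiler then maps each declaration onto whatever storage the target offers - on a /360 FIXED BINARY(15) typically ends up in a halfword and FIXED BINARY(31) in a fullword, while a compiler for a 36-bit or a decimal machine is free to pick whatever fits there.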

The result is a machine-independent definition that allows porting programs between vastly different architectures without the need to rewrite anything - seems great for an HLL, doesn't it?
