
'Blue Magic: The People, Power and Politics Behind the IBM Personal Computer' is an excellent book, but it makes one claim I cannot quite make sense of. Page 13 of the hardback edition says:

"Lowe soon realized that the success of IBM's projected line of personal computers would depend on their compatibility with the company's established line of medium-sized and mainframe machines. IBM was obviously in a position to make a linkup of machines a reality; Apple was not. Because IBM had extensive patent protection on its machines, Apple was restricted to operating within its smaller product universes."

The IBM PC was in no way compatible with the company's mainframes. The CPU instruction set, hardware architecture and operating system were all completely different; the programming languages were assembly, BASIC and Pascal versus FORTRAN, COBOL and PL/I; you couldn't even use it as a terminal, because while it had a serial port, 360-series terminals used a weird proprietary connector.
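Even the character sets differed: mainframes stored text in EBCDIC while the PC used ASCII, so plain data files needed translation before either side could read the other's. A minimal sketch using Python's built-in `cp037` codec (one common US EBCDIC code page, chosen here purely for illustration):

```python
# ASCII/EBCDIC round trip using Python's cp037 codec
# (cp037 is the US/Canada EBCDIC code page; other EBCDIC
# variants exist, so this is just an illustrative choice).
text = "HELLO, WORLD"
ebcdic = text.encode("cp037")          # bytes as a mainframe would store them
ascii_again = ebcdic.decode("cp037")   # translate back for the PC side
assert ascii_again == text
```

Note that the byte values have nothing in common: EBCDIC 'A' is 0xC1, ASCII 'A' is 0x41, and the EBCDIC letters aren't even contiguous.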

As far as I'm aware, Apple never seriously contemplated a mainframe connection, because Steve Jobs didn't see the value; he famously remarked at one point that he saw no reason why you would need an umbilical cord to connect your computer to your company.

Then again, none of the other microcomputer companies did anything along those lines either. Was that just because mainframes were a different universe and they didn't think of crossing the chasm, or were there relevant patent barriers?

  • 4
    Although IBM did call its PC division “Entry Systems,” the IBM PC (AKA the model 5150) couldn’t even read EBCDIC data files. On the other hand, the IBM 5110 and 5120 were a lot more mainframe-compatible. Is that what the book is referring to?
    – Davislor
    Commented Dec 9, 2018 at 21:30
  • 3
    I imagine a lot depends on what is meant by "linking up". If you want the PC to be a 3270-style terminal connecting to a 3274 controller (do I recall that number correctly?) that's one thing. If you're prepared to have the PC implement SNA, so it can be terminal and controller, that's another. The latter has no patent restrictions; or at least no-one ever came after me :-)
    – dave
    Commented Dec 9, 2018 at 21:43
  • 3
    SDLC to a 3705/3725 front-end comms processor, using existing sync line cards. I suppose that was also how remote-site real 3274s connected. I should confess that my interest was not PCs but various pieces of DEC equipment.
    – dave
    Commented Dec 9, 2018 at 22:36
  • 3
    @WalterMitty That was the hype. In reality, Jobs worked with both Microsoft and IBM when he needed to.
    – Davislor
    Commented Dec 10, 2018 at 1:53
  • 4
    “at least no-one ever came after me :-)” I’m afraid you can’t do that, @dave
    – Davislor
    Commented Dec 10, 2018 at 1:59

3 Answers


(I have to admit, my first impulse here was to have the question closed for being unspeakably broad - then again, I guess RC.SE isn't as narrowly focused, and more importantly, it gives a chance for a nice rant :))


It would be quite helpful to narrow this question down, not only to a single topic (or a closely related group), but also to add a reasonable reference for the claim it implies. Some research on the claims made would also help - for example, the idea of assembly being a PC thing, whereas it was and still is a relevant mainframe language. After all, being 100% assembly-language compatible back to the 1960s is still a major selling point for new mainframes - and not just in some emulation, as on PCs, but down to the user-visible ISA and OS interfaces.

Compatibility can mean several things, from 'just' connecting as a terminal all the way to running applications of the system it is compatible with. All of these had already been achieved with microcomputers early on.

Running Applications?

Many commercial applications on mainframes were written in COBOL, and nearly all mini and micro manufacturers offered some COBOL environment as early as possible - or welcomed third-party suppliers. Like Micro Focus, a company that's still around, which was formed in the mid-1970s to supply IBM-compatible and standard COBOL.

Their CIS COBOL (*1) was available on nearly everything, from the 6502-based Apple II and 8080 systems under CP/M, to the same under MP/M, to the PDP-11 with RT-11, as well as Unix on other CPUs. In addition, they offered several libraries and tools (like FORMS and FILESHARE) with interfaces compatible with IBM products for data or terminal handling.

Then again, applications on mainframes were seldom about a single user with a single loaded application working on private data (which is the PC mantra), but rather dozens or even hundreds of users working on the same data in a single application - something the mini and micro world simply did not supply in hardware.

There was no way to connect hundreds of terminals to a micro (or mini) and have them operate at reasonable speed over long-distance lines. Similarly, adding mainframe-class storage (as in several tapes and disks in the 100+ MiB range) would soon push the cost into the same region as the mainframe itself - while adding the cost of maintaining a second, incompatible hardware base and a second software version. Not easy to sell to management.

Sure, there was always some lonely but important accounting guy working as a single user, and those applications had a good chance of being moved; small multi-user systems could also handle some of the absolute lowest-end workloads - the reason why Micro Focus focused more on multi-user systems (MP/M, RT-11 or Unix) than on single-user micros.

Lesson 1: Transferring applications to minis and micros wasn't so much of a hurdle; the real problem was the missing large-scale capabilities of mainframes.

Replacing Terminals?

Sure, this has been done many times. 3270-family emulations with the corresponding communication cards were available for many (somewhat professional) micros and nearly all minis. Again, this was more a market for third-party offerings than for the machines' makers. As with application support, customers wanted not just hardware but a plug-and-play solution, with the focus on support for their specific needs.

Heck, there was so much of a market that even Apple itself created the Apple Communications Protocol Card (*2) to join the 3270 emulation and 2780/3780 remote-job-entry markets. At 700 USD for the hardware and 300 USD for the emulation software (*3), Apple cut itself a nice slice of the profit.

Again, as with applications, building the hardware (or compiler) is just a tiny part of the job. To make a micro a viable replacement for a genuine (or third-party) 3270, it needed a very specific application environment and adaptation - like the ability to load data prepared in a VisiCalc sheet into an existing mainframe application, or, more often, to pull mainframe data down to be incorporated into local applications. Which, again, is something only a few people need, in very specific settings. Good business, but not big business - and not really visible either.

Or turning the micro into a cashier system with direct inventory booking to a mainframe, effectively making it a very special terminal that IBM did not deliver - or a kind of decentralized subsystem (see below); there's much room for debate over where one ends and the other starts.

Lesson 2: Use as a terminal or for basic RJE was done with many micros, all the time - it just wasn't as visible as some joystick sale.

Micros as 'Partner' Systems?

One very remarkable thing about IBM is that they always worked with open interfaces. Each and every interface was well defined (*4) and usually not covered by any patents - at least not during the 60s and 70s, when the essential mainframe interfaces were defined. Heck, the whole plug-compatible industry was based on them. Just because the later minis and micros used less sophisticated interfaces, and/or didn't bundle interface and protocol as tightly as IBM did, doesn't make them less standard or less common. Ignoring the standard channel interface (aka the plug on the mainframe), this meant for the most part serial interfaces based on SDLC/HDLC (*5), but also Bisync (*6), which is basically 3270 (*7).

SDLC and Bisync are framing protocols; SDLC also has addressing capabilities, which means that multiple connections and partners can communicate over a single line. Think of it as similar to IP (*8). With an SDLC interface handled, all that's needed is a VTAM-like API for an application to enable communication between a micro application and one running on a mainframe.
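As a rough sketch of what 'framing with addressing' means (my own illustration, not any specific IBM product code): an SDLC/HDLC frame is flag, address, control, payload, frame check sequence, flag - and the address byte is what lets several secondary stations share one line.

```python
# Illustrative SDLC-style frame builder. Real hardware also does
# bit-stuffing between the flag bytes, which is omitted here.
FLAG = 0x7E

def fcs16(data: bytes) -> int:
    """CRC-16/X.25, the frame check sequence used by HDLC."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def build_frame(address: int, control: int, payload: bytes) -> bytes:
    body = bytes([address, control]) + payload
    fcs = fcs16(body)
    return bytes([FLAG]) + body + fcs.to_bytes(2, "little") + bytes([FLAG])

# Two stations sharing one line, distinguished only by the address byte:
frame_a = build_frame(0xC1, 0x00, b"data for station A")
frame_b = build_frame(0xC2, 0x00, b"data for station B")
```

A receiver watching the line simply discards frames whose address byte isn't its own - which is exactly what makes multidrop lines (and hence cheap remote connections) possible.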

Based on this, micros could easily replace classic RJE stations, take on 'outsourced' smaller applications (using a compatible COBOL compiler), access central databases, and - more importantly - build high-level client-server systems with the micro handling most local I/O while the host does the processing. After all, client-server is no different from intelligent terminals anyway :))

Applications like this were being built starting in the 70s, with minis and later micros. IBM's VTAM, when introduced in 1974 (?), was right away used for such connections - and third-party developers delivered. The IBM PC had SDLC/HDLC (and Bisync) cards and drivers available from day one - one of the hidden reasons for its success. It could deliver connectivity out of the box, which in turn was far more important than speed or any other feature for professional applications in a company environment.

Again, as before, these are applications that popular magazines didn't talk about much, from suppliers who didn't advertise in Byte or the like. No wonder it's unknown to everyone who grew up outside the mainframe world.

Lesson 3: Decentralized processing using mainframe couplings was quite common throughout the mini and micro era, just not visible to Joe Basementhacker.

Bottom Line

While the IBM-mainframe-compatible market is huge and turns over a lot of money, it is also very different from what hardware-only manufacturers experience. It's all about solutions and integration. After all, that's why IBM is still a big name. For their customers it doesn't really matter what character code is used, what plugs, or whether bits are numbered left to right or vice versa. They want solutions, and if IBM provides the base to run their applications in their environment, it doesn't matter whether it's done on a classic /370 mainframe or in a Linux cluster.

It was a less public market, and microcomputer people didn't have many contacts or much inside information - much as the majority of people in the 80s had no idea what these home computers were good for, while their kids (and their kids' peers) did interact with them.

For microcomputer manufacturers focused mainly on hardware, this was no market at all. Sure, they could earn good money per unit, but they would have to spend real effort selling a few hundred or maybe a thousand machines to one customer, when in the same time they could sell a million to a less demanding public. And that's where they were heading.

The 'micro-on-mainframe' market is what smaller, quite valuable third-party companies managed. That market is, again, so specialized that an average micro could be designed and produced as part of a software project - which happened multiple times.

It's about the solution, stupid


*1 - Compact Interactive Standard COBOL

*2 - No, not the Apple Communications Card (A2B0003) which was the first serial card for the Apple II, but the Apple Communications Protocol Card (A2B2070).

*3 - 1984 price - at that time a genuine 64 KiB IIe with an 80-column card and one disk drive was already less than 1200 USD - not to mention any well-stuffed clone.

*4 - Well, mostly, that is - or better: it was well defined, but IBM itself didn't always follow the specs. I remember one case of an SDLC line where one of our customers kept having interrupted transfers. Highly sporadic. IBM didn't spend much time or effort searching for the problem. They replaced the buffer (IBM's term for the interface card) twice, and since the protocol was HDLC and they were IBM, it was stated that the problem must be on the other side - after all, we were a less reliable third party.

Our buffers were fine - a brand-new development using an 8080 system, quite powerful for back then. So we had to sit down for more than two weeks with extensive line-debugging equipment (keep in mind, this was the late 70s; even high-class protocol analyzers could only save a few KiB of data) until we could reproduce the situation. And who would have guessed - it was an error in IBM's implementation of their very own SDLC protocol. So much for the 'standard' part.

*5 - SDLC - Synchronous Data Link Control; HDLC - High-Level Data Link Control. A protocol family with framing over various serial lines and offering addressing capabilities, so communication was not just between two endpoints - multiple connections could be handled over a single line. Think of it as similar to IP (*8).

*6 - What is most remarkable from today's perspective is the similarity of Bisync and USB. Except for line turn and character stuffing, the Bisync protocol is exactly like USB, starting with the SYNC sequencing all the way to alternating ACK0/ACK1 packets.
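For illustration, an ASCII-mode Bisync text block looks roughly like this (my own sketch; EBCDIC-mode Bisync uses CRC-16 rather than the simple LRC shown here):

```python
# Sketch of an ASCII-mode Bisync transmission block:
# SYN SYN STX <text> ETX BCC, where the block check character
# is a longitudinal redundancy check (XOR) over the bytes
# after STX up to and including ETX.
SYN, STX, ETX = 0x16, 0x02, 0x03

def bisync_block(text: bytes) -> bytes:
    checked = text + bytes([ETX])
    bcc = 0
    for b in checked:
        bcc ^= b
    return bytes([SYN, SYN, STX]) + checked + bytes([bcc])

block = bisync_block(b"PAYROLL DATA")
```

The leading SYN pair is the part that resembles USB's SYNC sequencing: it lets the receiver lock onto the bit stream before the data arrives.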

*7 - Or more correctly, Bisync is the base for RJE or 2780/3780

*8 - In fact, chances are good that your DSL/cable modem uses PPPoE to connect, where the PPP part is nothing else than a subset of SDLC :)))

  • 1
    Great material, but the original question (most notably in its title) was asking about connection barriers due to IBM patents. Are you saying there were none? I agree with that as far as connections via SNA or earlier protocols, and I don't know what else might be in the book-author's mind.
    – dave
    Commented Dec 10, 2018 at 12:45
  • 2
@dave I am not aware of any patent issue that would have stopped connecting PCs or transferring applications toward minis or micros. It would be more helpful if the OP could add some relevant examples when implying that there were any - as with the title - but then again, the 'question' implies so many points without naming a single one, making it rather pointless (SCNR)
    – Raffzahn
    Commented Dec 10, 2018 at 12:55
  • 1
    re "Assembly being a PC thing - whereas it was and still is a relevant Mainframe language" -- except that PC assembly is nothing like /360 assembly. I am forever confused why people talk about assembly language as if it's one language. "Assembler" just denotes the level of the language.
    – dave
    Commented Apr 27, 2022 at 12:17

I always thought the biggest hurdle was getting legally licensed operating software. A huge mainframe is worthless without a supported operating system, and those were often bundled with the hardware, not sold (or leased) separately.

At least that was what I was told hanging out at the University Data Center in 1981.

  • 2
    Based on what I've seen/heard with the Hercules emulator I would tend to believe this answer... Commented Jul 18, 2019 at 19:10

The UBC Computer Centre had PDP-11s as front ends to its S/370 (and later Amdahl) computers; as far as I know there were no patent issues. There was also a company called Auscom that built interfaces: here's an advert.

My recollection is that as part of the Teletype anti-trust settlement, IBM was required to provide interface specifications. (At one time, a friend and I were considering starting a company to build S/370-compatible peripherals, but that was around 1980 +/- and memory fades as to the details.)

I recall having the entire source for MFT on microfiche (written in assembler and PL/S), also the source for the PL/1 compiler. I was told that the MTS operating system used the core pieces of OS/360 MFT for its 1st and 2nd level interrupt handlers, time-slicing, I/O and disk layout (VTOC etc) and that was roughly 16KB.
