11

I read a novel in which a manufacturer in the '80s provided a GUI by adding dedicated drawing hardware alongside the video card, or by extending the video card.

Fictional as it is, is this possible, or even based on reality? I imagine this hardware would take control of I/O, draw window borders, position text and icons (but not images, which would take too much RAM and thus make it expensive), and manage z-order and redrawing, thereby freeing the CPU from most of the interactive interrupts.

11
  • 12
    There is nothing special about having a second CPU handle drawing to relax constraints on the main CPU. It has been done many times, all the way from a secondary Z80 to dedicated GPUs - not to mention hardware support for graphics primitives like line and area drawing or bit-blitting. Windows 3.x/95 could gain quite a bit of speed from graphics cards doing primitives on their own.
    – Raffzahn
    Commented Mar 27, 2021 at 11:07
  • 2
    @Raffzahn It is not that using a second processing unit dedicated to graphics would be surprising, but RAM was more expensive than CPUs in the early '80s, so you would have to store the windows in some kind of graphics description language and draw them not per frame (there is no room for the rendered raster) but per scanline.
    – Schezuk
    Commented Mar 27, 2021 at 11:24
  • 8
    @Schezuk Not really sure what you want to say, but RAM in the sizes used for screens wasn't overly expensive, as screen resolution was rather low back then. A high-end workstation in 1980 (>20 kUSD price tag) had a resolution of at most 1024x768 in B/W, which fits into 96 KiB of RAM - that's 48 4116 chips at ca. 3 USD each (in 1980), so less than 150 USD, or less than a percent of the system price. A high-res colour screen of an upper-end graphics workstation at that time would be something like 512x384 in 16 colours, which again fits into the same amount of RAM and money. It's easy to overestimate the cost of such capabilities back then.
    – Raffzahn
    Commented Mar 27, 2021 at 11:43
  • 5
    @Schezuk 320x200 is less than 32 KiB. That's less than 20 USD in 1984, when the Tandy was introduced. More importantly, the question was about systems with 'intelligent' graphics controllers, right? That is nothing you'd find in cheap systems like a Tandy; it's something only added to high-end workstations, not home computers.
    – Raffzahn
    Commented Mar 27, 2021 at 12:09
  • 2
    GUIs were already a thing in the early eighties (Apple Lisa, Xerox Star...), hardware accelerated or not. Commented Mar 29, 2021 at 8:48

10 Answers

23

In the early 1990s, with the rise in popularity of Windows 3, the PC world got a number of "Windows accelerator" video cards. These were 2D fixed-function GPUs with command sets that supported things like drawing lines, drawing solid-color rectangles, copying a rectangle from one location to another (useful for moving windows), and sometimes even storing an entire font or icon set in memory on the card so that a line of text or a toolbar could be drawn to the screen in a single command. Drivers that translated Windows' drawing primitives into commands the card understood could greatly reduce both the CPU time spent on graphics and the amount of data that had to be sent over the bus (which was pretty slow at the time; most people were limited to 16-bit ISA).
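
As a concrete illustration, here is a minimal sketch of that driver-to-card hand-off, in C. The command packet layout, the opcodes, and the submit() stub are entirely hypothetical (real accelerators such as the S3 911, ATI Mach series or ET4000/W32 each had their own register-level interfaces); the point is only that moving a window becomes one small command instead of shipping every pixel over the ISA bus.

    /* Sketch of how a Windows-3.x-era display driver might hand a GDI
     * "move this window" request to a fixed-function accelerator.
     * The packet layout and opcodes here are made up for illustration. */

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical command packet understood by the card. */
    typedef struct {
        uint16_t opcode;      /* 0x01 = solid fill, 0x02 = screen-to-screen copy */
        uint16_t src_x, src_y;
        uint16_t dst_x, dst_y;
        uint16_t width, height;
        uint32_t color;       /* used only by the fill opcode */
    } accel_cmd;

    /* Stand-in for poking the command into the card's FIFO over ISA I/O ports. */
    static void submit(const accel_cmd *c)
    {
        printf("card: op=%u src=(%u,%u) dst=(%u,%u) %ux%u color=%08lx\n",
               c->opcode, c->src_x, c->src_y, c->dst_x, c->dst_y,
               c->width, c->height, (unsigned long)c->color);
    }

    /* GDI asks to move a window: one copy command instead of sending
     * width*height pixels across a 16-bit ISA bus. */
    static void move_window(uint16_t sx, uint16_t sy, uint16_t dx, uint16_t dy,
                            uint16_t w, uint16_t h)
    {
        accel_cmd c = { 0x02, sx, sy, dx, dy, w, h, 0 };
        submit(&c);
    }

    /* GDI asks to clear the newly exposed area behind it. */
    static void fill_rect(uint16_t x, uint16_t y, uint16_t w, uint16_t h,
                          uint32_t color)
    {
        accel_cmd c = { 0x01, 0, 0, x, y, w, h, color };
        submit(&c);
    }

    int main(void)
    {
        move_window(100, 80, 300, 80, 400, 250);   /* drag a 400x250 window right */
        fill_rect(100, 80, 200, 250, 0x00FFFFFF);  /* repaint the exposed strip   */
        return 0;
    }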

3
  • 2
    Those "Windows Accelerator" cards would often also have a hardware mouse cursor, which would render above the rest of the framebuffer (much like sprites on the C64 and other platforms).
    – Jonathan
    Commented Mar 30, 2021 at 8:14
  • 1
    @Jonathan yup, saves having to repaint everything after the cursor isn't over it anymore :)
    – hobbs
    Commented Mar 30, 2021 at 16:50
  • 2
    Might be worth noting that that kind of technology was fully absorbed by mainstream computing, and generalized even further, leading to the modern GPU in the late '90s. Today, even the cheapest tablet displays things that way. The processor does not literally set every pixel in the framebuffer to display a web page. The main program constructs lists of instructions for shuffling around bitmaps and sends those off to the GPU. These days, even complex tasks like JPEG decompression and font rendering may be handled by the GPU. But these are old techniques, just now cheaply available.
    – RETRAC
    Commented Mar 30, 2021 at 18:15
19

I imagine the TMS34010 could have handled all of that, to beyond the standards of an ‘80s GUI; it is a combination CPU and GPU for the mainstream consumer market first released in 1986.

It and its descendants are known for powering pioneering arcade games such as Hard Drivin’ and Mortal Kombat, but it was also available as a putative video card standard for PCs as the TIGA; in that form, it was used as a CAD and Windows accelerator, though Windows versions of the era offloaded only the GDI functions (i.e., the pixel manipulations and plotting, but not the total layout and window-level logic).

15

Silicon Graphics started to provide dedicated graphics terminals and workstations in the 80s. So the technology was definitely there, even for 3D.

In 1991, they put this technology on a PC expansion card with IrisVision.

So if a full 3D pipeline was possible on an expansion card in 1991, the simpler hardware needed for GUI acceleration should have been possible a lot earlier.

The main problem earlier would have been fast enough access to the RAM, so I'd imagine a second card for video RAM (including DACs etc.) with a direct connection to the first card would have been a possible solution.

And the Xerox Alto had a framebuffer-based GUI back in 1973, with some assistance in the microcode. Though not on an expansion card.

So, very much possible, and somewhat based on reality (though not in this exact variant).

2
  • What about storing a graphics description language in video RAM and rendering it per scanline? RAM was expensive in the early '80s, so a framebuffer was not affordable on a PC.
    – Schezuk
    Commented Mar 27, 2021 at 11:35
  • @Schezuk you can't do that fast enough for 3D ... though it may work for the UI of a single application, without graphics. But as I don't know of any actual extension card for UI rendering, I have no idea what approach they'd settle on... and 64K on an extension card was doable before the PC era (e.g. the CP/M card for the Apple II).
    – dirkt
    Commented Mar 27, 2021 at 12:54
14

A simple form of this was embodied by the BBC Micro of all things, when paired with a Second Processor. The two were connected by a remarkably simple interface known as the "Tube".

In this configuration, the BBC Micro itself became an "I/O processor" handling keyboard input, display output, and all tape/disk/network functions, while the actual application code ran on the Second Processor - which need not be a second 6502, though it often was. Famously the ARM1 was mainly used as a Second Processor module for developing software that would later be used in the ARM2-based Archimedes computers.

As far as the display is concerned, the "Tube" interface connecting the two processors provided a dedicated byte-stream buffer, through which text and control codes could be sent easily. The I/O processor would interpret them exactly as if the same bytes had been sent to the VDU routines by a local program, so all the usual graphics routines were available - including functions to restrict text and graphics drawing to "windows" covering less than the full screen.

Executing these routines could proceed while the Second Processor went on to do its own processing, which could be a significant acceleration.
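
A rough sketch of that byte stream, in C rather than 6502 code: the VDU codes used (28 = define text window, 31 = move text cursor, 12 = clear) are the documented BBC MOS ones, but tube_write_byte() is just a print stub standing in for the real Tube FIFO register, so this only models the idea, not the actual hardware interface.

    /* The language/second processor simply streams ordinary VDU bytes to the
     * I/O processor, which does all the drawing. */

    #include <stdio.h>

    /* Stand-in for writing one byte into the Tube's outgoing VDU FIFO. */
    static void tube_write_byte(unsigned char b)
    {
        printf("%3u ", b);          /* just log the byte stream here */
    }

    static void vdu(unsigned char b) { tube_write_byte(b); }

    /* VDU 28,left,bottom,right,top : restrict text output to a window. */
    static void define_text_window(unsigned char left, unsigned char bottom,
                                   unsigned char right, unsigned char top)
    {
        vdu(28); vdu(left); vdu(bottom); vdu(right); vdu(top);
    }

    /* VDU 31,x,y : move the text cursor inside the current window. */
    static void move_cursor(unsigned char x, unsigned char y)
    {
        vdu(31); vdu(x); vdu(y);
    }

    static void print_text(const char *s)
    {
        while (*s) vdu((unsigned char)*s++);
    }

    int main(void)
    {
        define_text_window(5, 20, 34, 5);  /* a text "window" mid-screen       */
        vdu(12);                           /* VDU 12: clear (just that window) */
        move_cursor(0, 0);
        print_text("HELLO FROM THE SECOND PROCESSOR");
        putchar('\n');
        return 0;
    }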

Although the Tube interface was already available in the original BBC Micro of 1982, it is most commonly associated with the BBC Master of 1986. It could be bought as a pre-accelerated system - the BBC Master Turbo - with a double speed 65C02 Second Processor mounted on a small card inside the case, rather than in a separate case alongside. You can even play with one in your browser.

3
  • Note that the "Tube" connector on the BBC Micro was just a connection to the processor bus. The actual Tube chip was in the second processor unit, which looks like it was released in 1984. As I understand it, in addition to a "DNFS" ROM containing, among other things, code to support the Tube, it also required the BBC Micro's operating system to be version 1.2. Commented Mar 29, 2021 at 11:56
  • Not really an answer. While this could have been done, from memory the Tube was used for a second CPU, not for the advanced GPU offload the OP asks about.
    – Stilez
    Commented Mar 30, 2021 at 5:05
  • 1
    @Stilez Once the 2nd CPU was attached, the original computer became the GPU and I/O offload engine. The Z80 version was also commonly used with an actual desktop GUI, though a relatively primitive one.
    – Chromatix
    Commented Mar 30, 2021 at 5:18
13

The Amiga GUI used the hardware's custom chipset (the 'GPU', if you will) to assist in drawing lines and blitting rectangles. That was back in 1985.
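
To illustrate what that assistance bought, here is a small plain-C model (not Amiga driver code, and not touching the real custom-chip registers) of the blitter's core operation: a word-oriented rectangle copy in which each destination word is computed from an 8-bit "minterm" truth table over the sources, 0xF0 being the classic "D = A" straight copy.

    #include <stdint.h>
    #include <stdio.h>

    /* Evaluate an Amiga-style minterm bit by bit: the 8-bit minterm is a
     * truth table indexed by the corresponding bits of A, B and C. */
    static uint16_t apply_minterm(uint8_t minterm, uint16_t a, uint16_t b, uint16_t c)
    {
        uint16_t d = 0;
        for (int bit = 0; bit < 16; bit++) {
            int idx = (((a >> bit) & 1) << 2) | (((b >> bit) & 1) << 1) | ((c >> bit) & 1);
            if ((minterm >> idx) & 1)
                d |= (uint16_t)1 << bit;
        }
        return d;
    }

    /* Copy a rectangle of 16-bit words from one bitplane to another, the kind
     * of operation the hardware describes with its size and modulo registers. */
    static void blit_rect(const uint16_t *src, int src_words_per_row,
                          uint16_t *dst, int dst_words_per_row,
                          int width_words, int height, uint8_t minterm)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width_words; x++)
                dst[y * dst_words_per_row + x] =
                    apply_minterm(minterm,
                                  src[y * src_words_per_row + x],
                                  0, dst[y * dst_words_per_row + x]);
    }

    int main(void)
    {
        uint16_t screen[4][4] = {{0}};
        uint16_t gadget[2][2] = {{0xFFFF, 0x8001}, {0x8001, 0xFFFF}};

        blit_rect(&gadget[0][0], 2, &screen[1][1], 4, 2, 2, 0xF0);  /* D = A */

        for (int y = 0; y < 4; y++) {
            for (int x = 0; x < 4; x++) printf("%04X ", screen[y][x]);
            putchar('\n');
        }
        return 0;
    }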

I'm not sure any functionality beyond that would have made much sense in the eighties: due to the low resolutions and the lack of colors, GUI elements were extremely simple - mostly monochrome, only a few pixels/lines per element.

On top of that, systems either didn't support multitasking (Windows, Mac OS), were so limited by RAM that their multitasking abilities were often theoretical (Amiga), or mostly used the GUI to manage a bunch of shell windows and a clock for lack of actual GUI applications (Unix). It's not as if having a very fast GUI that doesn't eat up CPU resources is your top priority when you're only running one application at a time.

1
  • 2
  • The Blitter also had full access to the lower 512 (later 1024) KB of RAM (the "Chip RAM") via 4 DMA channels, and when running at full steam ("nasty" mode) it could even shut down CPU access to this lower RAM for the duration of the blit operation. Commented Mar 29, 2021 at 9:21
4

I think what you’re describing is a GPU, so yes it’s both possible and commonplace. I don’t recall it being done widely until the 90s though.

1
  • 1
  • As mentioned above, the Amiga and Atari ST already had blitters to accelerate rectangle moving and drawing in the mid '80s. Commented Mar 29, 2021 at 12:37
4

Yes. It wasn't that long ago that stand-alone X-Windows terminals existed. Clients running on server hardware shared by the whole team would display on them, while being controlled by individual team members. We used those at IBM Federal Systems Division in the '90s, where the server ran AIX.

Years before that, Digital Equipment Corporation developed GiGi terminals as a lower-cost alternative to the ridiculously expensive Tektronix graphics terminals of the day. In all cases, the "terminals" were simpler computers in their own right that provided vector-based graphics capabilities on raster display hardware.

3

Sketchpad from 1963 would qualify.

In general, the framebuffer's ability to display "anything" is just too danged useful to make a GUI without it. Building a GUI without a framebuffer, in order to avoid needing much memory, would mean that you need a very fast graphics processor, and that you have to accept limits on the complexity of the user interface, because you have to be able to draw every scanline within the time it takes to display that scanline.

Imagine, for example, the graphics of an NES. It has fixed tilesets, and a fixed number of dynamic sprites that can be displayed with the hardware in a frame. So, you can build a GUI on an NES, but you can't have arbitrarily many overlapping windows at arbitrary positions on that type of hardware the way you would expect on something like a Macintosh or an Amiga. If you tried, the graphics chip would have a potentially unbounded amount of work to do in a bounded amount of time to display that scanline, so it wouldn't work.
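
A toy model of that constraint, in C: each "scanline" is generated on the fly from a fixed-size, front-to-back list of window rectangles, so the work per line is bounded, and the hard MAX_WINDOWS limit is exactly the kind of restriction such hardware would impose. The sizes, limits and fill characters here are, of course, made up for illustration.

    #include <stdio.h>

    #define WIDTH        40
    #define HEIGHT       16
    #define MAX_WINDOWS  4      /* hard limit = bounded work per scanline */

    typedef struct { int x, y, w, h; char fill; } window;

    /* Windows listed front-to-back; index 0 is on top. */
    static const window wins[MAX_WINDOWS] = {
        {  4, 3, 18,  8, '#' },
        { 14, 6, 20,  8, '+' },
        {  0, 0, 40, 16, '.' },   /* "desktop" background */
        {  0, 0,  0,  0, ' ' },   /* unused slot */
    };

    static char pixel_for(int x, int y)
    {
        for (int i = 0; i < MAX_WINDOWS; i++) {       /* fixed, small loop */
            const window *w = &wins[i];
            if (x >= w->x && x < w->x + w->w && y >= w->y && y < w->y + w->h)
                return w->fill;
        }
        return ' ';
    }

    int main(void)
    {
        for (int y = 0; y < HEIGHT; y++) {            /* one "scanline" at a time */
            for (int x = 0; x < WIDTH; x++)
                putchar(pixel_for(x, y));
            putchar('\n');
        }
        return 0;
    }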

If you abandon a raster display and move to a vector display, you get much more flexibility in the timing budget for the UI. This is how Sketchpad worked. It used an oscilloscope as a vector monitor, rather than a TV-style raster display. Thus, it could theoretically display arbitrarily complex on-screen drawings without a framebuffer, at the cost of an increasingly flickery image. There's a demo in this YouTube video: https://www.youtube.com/watch?v=hB3jQKGrJo0

Of course, the very first thing that Ivan Sutherland worked on after the Sketchpad vector UI was framebuffers (he's the Sutherland in Evans and Sutherland, the company that sold the first commercial framebuffers), because having enough RAM for a raster image was obviously useful even in the earliest days of computer graphics.

In any event, the cost of custom vector displays, or custom hyperfast graphics processors, seemed impractical in the economics of the 1980s. RAM was expensive by modern standards, but GUIs of the time didn't use all that much of it. The original Macintosh famously had only 128 KB in 1984, and that was enough to be useful for real-world GUI applications, and reasonably priced by the standards of the time.

1

As far as I remember, the BBC Micro in the 1980s used a separate dedicated chip to handle and offload its "Teletext" video mode. But that mode had only extremely limited graphics options, and may not meet the OP's requirements.

Edit

I'm also fairly sure that most early machines simply didn't have a concept of z-layering or compositing in their graphics handling. There was a flat 2D raster grid screen, and you painted what you liked on it. If your program wanted to present things as if layered, that was something any program was welcome to implement for itself.
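
For illustration, here is a minimal sketch of that do-it-yourself layering in C: the program keeps its own z-ordered list and paints it back to front into a flat raster (the painter's algorithm), with a character array standing in for the framebuffer. Everything here is illustrative; the hardware itself knows nothing about windows.

    #include <stdio.h>
    #include <string.h>

    #define W 32
    #define H 10

    static char fb[H][W + 1];   /* flat character "framebuffer" */

    static void paint_rect(int x, int y, int w, int h, char c)
    {
        for (int j = y; j < y + h && j < H; j++)
            for (int i = x; i < x + w && i < W; i++)
                if (i >= 0 && j >= 0)
                    fb[j][i] = c;
    }

    int main(void)
    {
        for (int j = 0; j < H; j++) { memset(fb[j], ' ', W); fb[j][W] = '\0'; }

        /* Back to front: later calls overdraw earlier ones, giving the z-order. */
        paint_rect(0, 0, W, H, '.');     /* desktop        */
        paint_rect(3, 2, 16, 6, 'a');    /* bottom window  */
        paint_rect(12, 4, 16, 5, 'b');   /* top window     */

        for (int j = 0; j < H; j++) puts(fb[j]);
        return 0;
    }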

2
  • 1
    Isn't this the same as Chromatix's previous answer?
    – DrSheldon
    Commented Mar 30, 2021 at 7:02
  • 1
    No. The base model had a Teletext graphics offload chip (on-PCB hardware), nothing to do with the Tube available for the same system.
    – Stilez
    Commented Mar 30, 2021 at 7:35
0

It's entirely reasonable, and in fact it was done, if my understanding is correct, by the few terminals that supported the AlphaWindows standard.

Again, provided that my understanding and recollection are correct, this was very much like an X11-based window manager in that programs running on a remote minicomputer sent the commands to draw the content of windows, but the terminal itself drew the furniture (outline and title bar) and allowed windows to be moved around the screen.

The standard didn't amount to anything, at least in part because the companies involved were very close with their information and had little-to-no interest in telling developers etc. how to make use of the proposed facilities.

2
  • Here it says "it lets you run your current text based applications in a motif like windowing environment without re-writing the applications", so it looks like you get something like xterms in a windowing environment - not with an add-on PC card, but inside a dedicated terminal (which probably presents a multiplexed serial interface to a Unix-like machine). Not exactly what the OP was asking for ... but interesting nonetheless.
    – dirkt
    Commented Mar 29, 2021 at 12:25
  • And the dedicated terminal /is/ hardware, which is what the OP was asking about. I only came across them peripherally (so to speak), and I think they were being promoted as potential wrappers around the text sessions which were dominant in those days. I've certainly come across a PC in a similar role; it was in a maintenance office and had icons for 20 or 30 terminal emulator sessions to different mainframes etc. on its screen. Commented Mar 29, 2021 at 14:04
