29

Operating system development tutorials describe reaching screen data by writing directly to VGA, EGA, or Super VGA, but what I do not get is: what is the real difference between writing to a fixed address for display and writing to a video card directly, whether onboard or removable? I just want basic clarification of my confusion on this issue.

And since it's not such a simple case, what with variations in cards, connection interfaces, buses, architectures, systems on a chip, embedded systems, etc., I find it hard to understand the idea behind this 100%. Would the fixed addresses differ between a high-end GPU and a low-end onboard one? Why or why not?

It is one of my goals in programming to write a kernel and make an operating system, a far-fetched dream indeed. Failing to understand the terminology not only hinders me in some areas, but makes me seem foolish on the subject of hardware.

EXTRA: Some of the current answers speak of using the processor's maximum addressable memory, specifically in 16-bit mode. The problem is these other arising issues:

1. What about the card's own memory? That would not need system RAM for screen data itself.

2. What about higher-bit modes? And can't you neglect the BIOS in real mode (x86) and still address memory through AL?

3. How would the concept of writing to a fixed address remain unchanged on a GPU with multitudes of registers and performance at or above that of the actual microprocessor?

2
  • For a little historical context, check out my answer to a related question: superuser.com/questions/357328/… Commented Jan 24, 2013 at 23:31
  • It should be noted that, in addition to referring to the display card technology/protocol, the term has come to designate particular electrical standards and display resolutions. It's hard to guess which meaning is being applied, even when you see the terms "in context". Commented Jan 26, 2013 at 17:52

4 Answers

65

Technically VGA stands for Video Graphics Array, a 640x480 video standard introduced in 1987. At the time that was a relatively high resolution, especially for a colour display.

Before VGA was introduced we had a few other graphics standards, such as Hercules, which displayed either text (25 lines of 80 characters) or relatively high-definition monochrome graphics (at 720x348 pixels).

Other standards at the time were CGA (Colour Graphics Adapter), which also allowed up to 16 colours at resolutions of up to 640x200 pixels. The result of that would look like this:

[screenshot of CGA graphics output]

Finally, a noteworthy PC standard was the Enhanced Graphics Adapter (EGA), which allowed resolutions up to 640×350 with 16 colours from a palette of 64.

(I am ignoring non-PC standards to keep this relatively short. If I start to add Atari or Amiga standards (up to 4096 colours at the time!) then this will get quite long.)

Then in 1987 IBM introduced the PS/2 computer. It had several noteworthy differences from its predecessors, including new ports for mice and keyboards (previously mice used 25-pin or 9-pin serial ports, if you had a mouse at all), standard 3½-inch drives, and a new graphics adapter with both a high resolution and many colours.

This graphics standard was called Video Graphics Array. It used a 3-row, 15-pin connector to transfer analog signals to a monitor. This connector lasted until a few years ago, when it was replaced by superior digital standards such as DVI and DisplayPort.

After VGA

Progress did not stop with the VGA standard. Shortly after the introduction of VGA, new standards arose, such as the 800x600 Super VGA (SVGA), which used the same connector. (Hercules, CGA, EGA, etc. all had their own connectors. You could not connect a CGA monitor to a VGA card, not even if you tried to display a low enough resolution.)

Since then we have moved on to much higher resolution displays, but the most often used name remains VGA, even though the correct names would be SVGA, XGA, UXGA, etc.

[chart of display standard resolutions]

(Graphic courtesy of Wikipedia)


Another thing which gets called 'VGA' is the DE15 connector used with the original VGA card. This usually blue connector is not the only way to transfer analog 'VGA signals' to a monitor, but it is the most common.

Left: DB15HD. Right: alternative VGA connectors, usually used for better quality. [photo of connectors]


A third way 'VGA' is used is to describe a graphics card, even though that card might produce entirely different resolutions than VGA. The use is technically wrong, or should at least be 'VGA-compatible card', but common speech does not make that distinction.


That leaves writing to VGA

This comes from the way the memory on an IBM XT was divided. The CPU could address up to 1MiB (1024KiB) of memory. The bottom 512KiB was reserved for RAM, the upper 512KiB for add-in cards, ROM, etc.

This upper area is where the VGA card's memory was mapped. You could write to it directly and the result would show up on the display.

This was not just used for VGA, but also for its same-generation alternatives.

  G = Graphics Mode Video RAM
  M = Monochrome Text Mode Video RAM
  C = Color Text Mode Video RAM
  V = Video ROM BIOS (would be "a" in PS/2)
  a = Adapter board ROM and special-purpose RAM (free UMA space)
  r = Additional PS/2 Motherboard ROM BIOS (free UMA in non-PS/2 systems)
  R = Motherboard ROM BIOS
  b = IBM Cassette BASIC ROM (would be "R" in IBM compatibles)
  h = High Memory Area (HMA), if HIMEM.SYS is loaded.

Conventional (Base) Memory:
First 512KiB (8 chunks of 64KiB).

Upper Memory Area (UMA):

0A0000: GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
0B0000: MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
0C0000: VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0D0000: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0E0000: rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr
0F0000: RRRRRRRRRRRRRRRRRRRRRRRRbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbRRRRRRRR

(Source of the ASCII map).

25
  • 1
    Nit comment: It's not a "DB15" connector. A DB15 connector has only 2 rows of pins (just like DB9 and DB25). The VGA connector has 3 rows of pins, and is often called "HD15" (HD for high density compared to DB) (although some assert that "HD15" is not an official name).
    – sawdust
    Commented Jan 24, 2013 at 21:04
  • 2
    Nit comment #2: The original IBM PC may have been released with a 512/512 split, but it was soon changed to a 640/384 split (this is referenced in your source page). Graphics memory starts at the 640K mark (hex 0A0000). I don't think anybody ever really became aware of a "512K boundary" in the way that the "640K boundary" eventually came to be a well-known issue.
    – Hellion
    Commented Jan 24, 2013 at 22:00
  • 2
    @sawdust: HD15 is definitely not an official name (but is as good as, these days). In the Dx-nn connector family, x is the size of the shell, nn is the number of pins. Shell B is the same size as a parallel port (or an old, full implementation 25-pin serial port). Shell E is the same size as the serial port. So technically, the VGA 15-pin connector would be DE-15, but this was never part of the original line-up of connectors. AFAIK it never even existed before IBM's use on the PS/2 MCGA, VGA & 8514/a. Wikipedia has a good explanation: en.wikipedia.org/wiki/D-subminiature
    – Alexios
    Commented Jan 24, 2013 at 23:15
  • 3
    I love the superlative inflation of resolutions: ultra-wide-super-extended-hyper-quad-graphics-array. You can't call it high-def, because one day it won't be!
    – aidan
    Commented Jan 25, 2013 at 1:20
  • 2
    @LackingConfidence if you want the performance the cards can offer, you need to use their individual proprietary interfaces. If you don't care about the performance, there is the VGA BIOS to set up a VESA framebuffer for you. Look at Linux's vesafb.txt for details (and of course source code as well in Linux).
    – derobert
    Commented Jan 30, 2013 at 21:05
10

Writing to a "fixed address" was essentially writing to a video card directly. All those ISA video cards (CGA, EGA, VGA) essentially had some RAM (and registers) mapped directly into the CPU's memory and I/O space.

So when you wrote a byte to a certain memory location, that character (in text mode) appeared on screen immediately, since you in fact wrote into memory located on the video card, and the video card just used that memory.

This all looks very confusing today, especially considering that today's video cards are sometimes still called VGA (and they bear little resemblance to "true" VGA cards from the 1990s). However, even modern cards emulate some of the functionality of these older designs (you can boot DOS on most modern PCs and use DOS programs that write to video memory directly). Of course, nowadays it's all emulated in the video card's firmware.

4
  • So how would that make sense with an onboard video card? I still don't get how VGA can be the address if the card is not VGA-dictated.
    – user192573
    Commented Jan 25, 2013 at 2:45
  • 1
    Even if your video card is integrated, it is still connected to the rest of the system via some kind of bus: PCIe, PCI, AGP, ISA, etc. These buses can connect external components to the motherboard, and can connect internal components inside the chipset (SATA, video, etc.)
    – haimg
    Commented Jan 25, 2013 at 4:15
  • 1
    But how do the buses know what to do with the addresses? This will differ between PCI and onboard cards, GPUs even, or integrated graphics microprocessors.
    – user192573
    Commented Jan 26, 2013 at 1:01
  • 1
    There is no difference whatsoever, whether wires are routed to the PCI connector, or if all connections are inside your northbridge. en.wikipedia.org/wiki/Conventional_PCI#PCI_address_spaces
    – haimg
    Commented Jan 26, 2013 at 6:05
3

There isn't really a difference: if you're writing to the address of video memory, then the hardware will route that to the video card.

If you're writing your own operating system, you will probably have to do quite a lot of work getting the graphics card to map its memory how you want, starting by scanning the PCI bus to find the card.

4
  • My graphics card is onboard on my northbridge, it is not PCI-connected. I think it's an Intel GMA.
    – user192573
    Commented Jan 25, 2013 at 2:43
  • 3
    Your graphics processor may not occupy a PCI slot, but it's certainly sitting on one of the system's buses... even if it's on the motherboard, heck even if it's integrated directly as part of a system-on-a-chip. The same way your motherboard's SATA controllers are, or USB controllers, or... You should see the onboard GPU listed (and SATA, USB, etc. controllers), along with its PCI ID, if you use a sufficiently barebones PCI-bus inspection tool for your OS. Under linux it's just 'lspci' on the command line. For Windows, I prefer Gabriel Topala's "SIW". Macs... might also have an 'lspci'?
    – FeRD
    Commented Jan 25, 2013 at 5:57
  • The OS doesn't matter in such a case, because the hardware architecture and platform is what counts. The OS is just built atop that, and it is the main entry point to interact with all hardware the kernel supports a service to. The architecture and its specification is what you're interfacing with. To determine your hardware just look up your motherboard online. That's a fairly easy start.
    – user192573
    Commented Jan 25, 2013 at 15:57
  • +1 starting by scanning the PCI bus to find the card.
    – n611x007
    Commented Sep 8, 2013 at 16:10
2

So far the answers have explained that old video cards worked by having video memory mapped into the processor's address space. This was the card's own memory. The northbridge knows to redirect requests for this mapped memory to the VGA device.

Then on top of that there were many expansions and new modes for VGA-compatible cards. This led to the creation of the VESA BIOS Extensions (VBE), which operate through int 10h. These support basic 2D acceleration (BitBlt), hardware cursors, double/triple buffering, etc. This is the basic method for full-colour display at any supported resolution (including high resolutions). It normally used memory internal to the card too, with the northbridge performing redirection as with classic VGA. This is the simplest way to get full-colour, full-resolution graphics.

Next there are direct methods of accessing the GPU without using the BIOS, which provide access to the same features as VBE, and possibly additional ones. My understanding is pretty fuzzy here. I think this interface is device-specific, but I'm not at all sure of that.

Then there is the GPU interface that can support 3D acceleration, GP-GPU computation, etc. This definitely requires manufacturer-provided drivers or specifications for full use, and frequently there are substantial differences even between devices from the same manufacturer.

6
  • A device driver is only necessary on an operating system. All hardware can be accessed directly on the architecture.
    – user192573
    Commented Jan 26, 2013 at 1:05
  • 1
    Sure, the problem with direct access for the 3D portions is that substantial portions of the protocol are considered trade secrets by some of the major GPU manufacturers, and thus, unless reverse engineered or a non-disclosure agreement is signed, a driver that already contains said knowledge is needed. Commented Jan 28, 2013 at 15:20
  • Depends on the card. If it's nVidia or AMD it's going to be proprietary. An onboard Intel GMA would be very much easier than an nVidia GEForce.
    – user192573
    Commented Jan 29, 2013 at 5:14
  • 1
    I've updated the line in question to read "drivers or specifications". Specifications are sufficient when you can get them, which is indeed the case for many recent Intel graphics solutions. Commented Jan 29, 2013 at 17:23
  • 1
    For what it's worth, all modern Intel and most AMD graphics cards have very large swaths of their programming specifications published. Nvidia still remains silent on the issue, but the Nouveau open source graphics driver contains a lot of documentation (in the form of source code) on programming Nvidia graphics cards. Intel/AMD/Nvidia are more open than proprietary ARM ASICs these days; the embedded/mobile chips are the most secretive of all. Commented Feb 6, 2013 at 18:20
