
First, let's make sure we're all on the same page:

As a bit of background: when you boot an operating system -- any operating system -- the firmware (BIOS, or UEFI GOP) tells the operating system which graphics adapter should be used as the primary framebuffer. The primary framebuffer is essentially a region of virtual memory that, when written to, causes the graphics driver to initiate a DMA transfer sending the frame data to the graphics output device. In a multi-GPU configuration it's not quite this simple, but the general idea is that, at a basic level, the operating system is only aware of a single framebuffer. For the purposes of deciding what constitutes a framebuffer, monitors plugged into the same graphics card are considered to be driven by the same framebuffer; monitors plugged into different cards are, by default, driven by different framebuffers. Several technical tricks exist today to bridge the hardware gap between different framebuffers; hence my question...

Assume that you have purchased more monitors than you have ports on any one of your graphics cards. For instance, if your graphics card has one port, you have two monitors. If your graphics card has two ports, you have three monitors. And so on.

Also assume that you do not want an Eyefinity or similar setup, where the operating system treats all the monitors as one "big monitor".

You do want to be able to drag the mouse, and windows, between different monitors, seamlessly.

Ways to do this:

  1. Physical graphics card bridging: Nvidia SLI or AMD CrossFire. These solutions allow you to plug your "extra monitor(s)" into a second discrete graphics card. The two graphics cards communicate with one another using dedicated bridge hardware (or, in the case of the latest-generation AMD Radeons, over the PCIe bus).

  2. Platform hardware-assisted framebuffer sharing: Nvidia Optimus, AMD Enduro, LucidLogix Virtu MVP... the concept is the same. You have monitor(s) plugged into one card (usually the motherboard's outputs, driven by the processor's iGPU), and monitor(s) plugged into a discrete graphics card. Some chip on the motherboard helps coordinate and synchronize these two separate graphics cards so that the operating system has the illusion of only one framebuffer, and thus you are able to have your multi-monitor setup. Note that some of these solutions can also control which GPU renders the frames, not just which output the finished frames are sent to.

  3. Software?: If neither of the first two hardware solutions is available, apparently there is still a way to do it. For instance, if your motherboard supports neither Nvidia Optimus nor LucidLogix Virtu MVP, and your cards are not in SLI, you can still take, say, an Nvidia GTX 280 and an Nvidia GT 210, plug them into the same machine, and get the same result in terms of the multi-monitor user experience: you can move your mouse and windows between the monitors seamlessly.

My question is, in the third option above, "Software?", how the heck does that work on Windows? Also, what is that particular mechanism/feature called?

  • Is it a feature of the vendor-specific graphics driver?
  • Is it built into Windows itself?
  • What is the darn thing called?

2 Answers


I think you are confusing 'framebuffer' and 'one big monitor'. You are also assuming that userspace can directly access the framebuffer on the graphics card.

Assume the most basic of windows: a simple image. To the OS, this is simply a block of main memory in which userspace can draw pixels. When signaled by the OS/windowing system, the graphics driver copies these pixels into the framebuffer of the graphics card, in the right space/order.

In pseudo-code:

int* MAIN_FRAMEBUFFER;
int RESOLUTION_X, RESOLUTION_Y;
struct WINDOW {
  int x, y;
  int width, height;
  int* pixels; // virtual framebuffer, width*height pixels
};
struct WINDOW* all_windows;
int nr_of_windows;
void draw_all_windows() {
    for (int i = 0; i < nr_of_windows; i++) {
        // transfer the virtual framebuffer of every window to the graphics card
        struct WINDOW w = all_windows[i];
        for (int y = w.y; y < w.y + w.height; y++) {
            // copy one row of the window into the main framebuffer
            memcpy(MAIN_FRAMEBUFFER + w.x + y * RESOLUTION_X,
                   w.pixels + (y - w.y) * w.width,
                   w.width * sizeof(int));
        }
        // Draw window manager decoration
        ...
    }
}

A graphics card driver could allow you to do this memcpy operation much more efficiently, e.g. by simply copying the WINDOW struct into graphics card memory, and doing the complicated line-by-line copy in hardware.

In a multi-monitor setup, you simply repeat the process for every graphics card, only copying the part of the window which is displayed on that specific monitor.

Again, in pseudo-code:

int* MAIN_FRAMEBUFFER1;
int RESOLUTION_X, RESOLUTION_Y;
// assume FRAMEBUFFER2 is to the right of FRAMEBUFFER1, and they have the same resolution
// the pseudo-code can be extended to allow a different resolution and orientation
int* MAIN_FRAMEBUFFER2;
struct WINDOW {
  int x, y;
  int width, height;
  int* pixels; // virtual framebuffer, width*height pixels
};
struct WINDOW* all_windows;
int nr_of_windows;
void draw_all_windows() {
    for (int i = 0; i < nr_of_windows; i++) {
        // transfer the virtual framebuffer of every window to the graphics card(s)
        struct WINDOW w = all_windows[i];
        for (int y = w.y; y < w.y + w.height; y++) {
            int* src = w.pixels + (y - w.y) * w.width;
            if (w.x + w.width <= RESOLUTION_X) {
                // fully on monitor 1
                memcpy(MAIN_FRAMEBUFFER1 + w.x + y * RESOLUTION_X, src,
                       w.width * sizeof(int));
            } else if (w.x >= RESOLUTION_X) {
                // fully on monitor 2
                memcpy(MAIN_FRAMEBUFFER2 + (w.x - RESOLUTION_X) + y * RESOLUTION_X, src,
                       w.width * sizeof(int));
            } else {
                // split between monitor 1 and monitor 2
                int monitor1_w_width = RESOLUTION_X - w.x;
                memcpy(MAIN_FRAMEBUFFER1 + w.x + y * RESOLUTION_X, src,
                       monitor1_w_width * sizeof(int));
                memcpy(MAIN_FRAMEBUFFER2 + y * RESOLUTION_X,
                       src + monitor1_w_width,
                       (w.width - monitor1_w_width) * sizeof(int));
            }
        }
        // Draw window manager decoration
        ...
    }
}

You could say that this system is complicated. Indeed, it has been simplified by DirectDraw, which allows you to allocate a block of memory directly in the framebuffer of the graphics card. However, this block is tied to one graphics card, which is why you see a green screen when you drag a media player that uses DirectDraw to your second monitor.

Note: I have no clue how this works in Windows 7 or other 3D window environments. I know they write the window framebuffer in a texture and render it in 3D. Maybe the texture is copied to the other graphics card when you move the window?

Finally, you refer to SLI and similar systems. This has nothing to do with 2D rendering. What happens is that you pool together the memory of both cards. Then, you instruct each GPU to render only a part of the screen. The second graphics card (with no monitor attached) writes the result of its calculations (pixels!) into the framebuffer of the primary graphics card, after which the VGA or DVI chip pushes this to the monitor. The reason a bridge is required is 1) to copy texture and model data only once, even though it is used by both GPUs, and 2) to allow the second GPU to write pixels into the framebuffer of the first.

You also refer to Nvidia Optimus et al. That system is actually very similar to SLI. However, the bridge is only required to allow the second GPU to write pixels into the framebuffer. The chips do not share any texture or vertex data, and cannot render a 3D scene in collaboration.

  • 2d rendering? DirectDraw? This answer seems to have absolutely nothing to do with my question. You seem to be simply restating parts of my question without actually answering anything I asked. The code is not helpful. I asked how Windows (or any of its attendant drivers) deals with having different monitors plugged into different graphics cards; for instance, one plugged into the iGPU and one plugged into the dGPU, specifically in the case where there's no dedicated hardware to take care of passing data back and forth (which is what LucidLogix does). You didn't answer that. Commented Jan 22, 2014 at 21:29
  • And no, I'm making no assumptions about what userspace can do. Device drivers run in the kernel, so they are more than welcome to directly write to any framebuffers on the system. Now, as I understand it, each GPU has its own framebuffer, stored in its own VRAM (or the reserved system RAM substitute thereof, in the case of UMA). It is obvious from the fact that multi-monitor works in this case that something is coordinating the partial rendering of windows / graphics objects across two or more graphics cards' framebuffers. What is that something? Commented Jan 22, 2014 at 21:31
  • Upvoted regardless, due to all the effort you put into it... Commented Jan 22, 2014 at 23:54
  • I'm sorry, he did answer. Windows does not deal with multiple monitors. Windows deals with a single surface, or multiple discrete ones, but ignores monitors entirely, as the framebuffer is handled entirely by the GPU without Windows' input or intervention. All Windows does is dump a 2D bitmap to a particular location in memory that has been pre-assigned as a 'hot zone' for an entirely unrelated framebuffer chip to grab the screen refreshes from. The problem is in your question. Your prior answers fail to recognize that Windows has no involvement in example 1 or 2 other than to say "start SLI" to the GPU Commented Apr 7, 2017 at 14:55

Parasietje's answer is correct, but explained perhaps from the point of view of someone who assumes you can fill in the many gaps yourself.

Windows in 2D does not deal with multiple monitors. Windows deals with a single surface, or multiple discrete ones, but ignores monitors entirely, as the framebuffer is handled entirely by the GPU without Windows' input or intervention. All Windows does is dump a 2D bitmap to a particular location in memory that has been pre-assigned as a 'hot zone' for an entirely unrelated framebuffer chip to grab the screen refreshes from. This is why you still get a display even if you have no drivers: your BIOS will configure VESA mode, defining a location in memory to dump to and nothing else. This is because a modern GPU is multiple devices to Windows: one or more display adapters, plus a rendering accelerator, plus a compute core.

The problem is in your question. Windows has no involvement in example 1 or 2 other than to say "start SLI" to the adapter.

To make it more explicit: Windows has 'Virtual Displays' and 'Virtual Display Adapters'. The former can be a VNC or RDP connection; these have nothing to do but pass the data along once the image is created. To Windows this is the same as writing to that memory address, and it is used for the final image.

The latter is simply a driver-defined method for dumping that image to the screen, which can be as complex as an emulated framebuffer outputting to a SCSI port through software rendering, or as simple as a "move to this address" command. Mostly it's little more than the latter. This is for getting the final image from your memory to your screen.

3D matters because this is how it works in 2D, and thus why your card will work without a driver. Windows works in 2D, hence the name. Left to Windows' output subsystems alone, your GPU will never process any rendering, just as without threading your CPU will never use a second core... Windows simply doesn't know what work to send it, or even that it accepts work. Even 2D rendering must be done this way.

3D is much more complex and proprietary to each card. It works via DirectX/OpenGL or DirectDraw (which is mostly DX now), and to Windows it's a black box. This is why all cards have to internally support these APIs to play games... Windows simply passes the drivers the commands it doesn't understand. Both 3D APIs 'plug' the processing part of your GPU between Windows' 2D renderer and Windows' Virtual Display, and then bridge the virtual display via drivers containing proprietary commands to the GPU. When I say proprietary, by the way, I mean VLIW/GCN/GeForce generation-specific codes.

This is how your GPU, once it has drivers, is able to render directly to its own framebuffers, whereas before that, in VESA mode, your CPU does the rendering and writes to RAM, and your BIOS/EFI passes it from RAM to the output.

This is also why your device appears in Device Manager as one device and not three.

