
From my understanding, the kernel provides the link between software and hardware, and therefore the kernel must direct the system calls made by OS applications, correct? So would different I/O address maps mean that the kernel must be programmed differently? Read below, as I don't believe I worded that question very accurately.

Let me elaborate, and please correct me if I am wrong, as this is how I understood what several articles stated. I will be using the x86 family as the basis for my examples. x86 processors use the INT instruction together with an index called the Interrupt Vector Table (IVT) to map an INT id to the location of the desired routine (routine and IVT located in the BIOS, correct?). The routines themselves are written so that they can command the hardware specific to a computer system to perform a task according to the protocol of the hardware being used. This allows the OS to make system calls and communicate with hardware without any knowledge of the hardware or I/O mapping specific to the system; all the OS needs to communicate with the hardware is the id of the specific ISR desired.

Because the kernel is the link between hardware and software, I am guessing that the applications being run by the OS are not required to even know the ISR id#. They simply tell the kernel that they want to, for example, write data X to the HDD; the kernel relays data X to the correct ISR, which then writes the data to the HDD. So two systems, completely identical except that they use different ISR id#s for different tasks, would require slightly different kernels?
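
A minimal sketch in C of the dispatch I have in mind (every name here is invented for illustration; this is not from any real kernel):

    /* The application names an abstract operation (a syscall number);
     * the kernel maps it to the routine that knows this machine's
     * actual I/O layout. */
    #include <stddef.h>

    typedef int (*syscall_fn)(const void *buf, size_t len);

    /* On machine A these bodies might use one ISR id# or port; on
     * machine B, another. Applications never see the difference. */
    static int sys_read_disk(const void *buf, size_t len)
    {
        (void)buf; (void)len;   /* machine-specific body goes here */
        return 0;
    }

    static int sys_write_disk(const void *buf, size_t len)
    {
        (void)buf; (void)len;   /* machine-specific body goes here */
        return 0;
    }

    static syscall_fn syscall_table[] = {
        [0] = sys_read_disk,
        [1] = sys_write_disk,
    };

    /* Kernel entry point: the only contract with applications is the
     * syscall number, never an ISR id# or I/O address. */
    int do_syscall(size_t num, const void *buf, size_t len)
    {
        if (num >= sizeof syscall_table / sizeof syscall_table[0])
            return -1;
        return syscall_table[num](buf, len);
    }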

And would that also mean that the boot sector which loads the kernel also depends on the ISR id mapping, since calls to read from the HDD would need to be made in order to load the kernel?

I apologize if this is in the wrong location, but I read that this is the correct place for hardware-related questions. Thanks!

  • As late as the 1990s, Unix kernels were configured and linked for each machine installation. The trend has been to use either plug-and-play schemes (self-identifying hardware combined with self-configuring software) or data-driven configuration (e.g. Device Tree). You're overly focused on interrupts. Interrupts are only performance enhancers, and not essential to accessing a device. To access a device, you have to know how to access it (i.e. a method) and where that device is (i.e. its device address, which could be I/O port(s) or memory address(es)); see the sketch after this comment thread.
    – sawdust
    Commented Jun 28, 2016 at 22:59
  • @sawdust I just don't understand how a device can be self-identifying without some sort of hardware interfacing protocol. If I were to interface an SD card to a Z80 and I was the one wiring it, I could set it up such that an OUT 06h to I/O address 0002h will bring the CS pin low and initiate a transfer, or I could configure it so that an OUT 03h to I/O address 0002h will initiate a transfer. Yes, the system may assign the SD card an address of 0002h, but how would it have any way of knowing that an OUT 06h or 03h will bring the chip-select pin low and initiate a transfer?
    – KeatonB
    Commented Jun 28, 2016 at 23:16
  • @sawdust Unless universal buses are used to connect all devices and the buses all share the same pinout. So, say, PCIe pins 0-31 are assigned to pins 0-31 of every connected device's data bus, regardless of the device. If that is indeed how it works, that makes complete sense.
    – KeatonB
    Commented Jun 28, 2016 at 23:30
  • You picked a terrible example. An SD card is not self-identifying, and it's media rather than a peripheral. The device the CPU would interface with would be an MMC controller, not the SD card itself. Successful plug-and-play schemes are those that use a peripheral bus, such as PCI or USB. The method of identifying peripheral devices attached to the bus is integral to the bus protocol. So yes, there has to be "some sort of hardware interfacing protocol".
    – sawdust
    Commented Jun 28, 2016 at 23:31
  • Your focus on x86 PC hardware is distorting your understanding. Since day one, the IBM PC standardized its configuration, and PC clones followed suit to establish the Wintel standard of computers. ARM processors, which are found in mobile devices and SBCs, have no such system-configuration standardization, and ARM has been plagued by one kernel build per board/machine variation. Only with the adoption of Device Trees for ARM Linux have Linux distros for ARM boards become possible.
    – sawdust
    Commented Jun 28, 2016 at 23:48
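
A small C sketch of the "method + address" point from the comments, using KeatonB's hypothetical SD-card wiring (the constants are invented; outb() is the x86 Linux port-I/O helper from <sys/io.h> and needs ioperm() privileges):

    /* To touch a device, software needs both an address ("where") and
     * a method ("how"). These constants mirror the hypothetical Z80
     * wiring described above, transplanted to x86 port I/O. */
    #include <sys/io.h>             /* outb(); requires ioperm() */

    #define SD_CTRL_PORT   0x0002   /* "where": the wired I/O address */
    #define SD_CS_LOW_CMD  0x06     /* "how": board A's chip-select value */
    /* #define SD_CS_LOW_CMD 0x03      board B, wired differently */

    static void sd_begin_transfer(void)
    {
        /* Only the command constant differs between the two wirings,
         * and nothing on the bus can tell the CPU which value is
         * right. That knowledge lives in the driver or in external
         * configuration data (e.g. a Device Tree). */
        outb(SD_CS_LOW_CMD, SD_CTRL_PORT);
    }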

1 Answer


Regarding the INT xxh instructions you reference (see: BIOS Interrupt Calls), you are correct that this used to be the way an OS would access low-level hardware. On a modern machine, these calls (if executed) often end up in the CSM (Compatibility Support Module, at least in AMI parlance), which can handle these requests. In the case of, say, a video BIOS call, that would execute the code in the video BIOS, if present. I've worked with Intel IGPs as a BIOS developer, and as part of building the final image, we had a tool from Intel with which we baked in their video BIOS as a blob.
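
For reference, the real-mode Interrupt Vector Table those calls go through is just 256 segment:offset pairs starting at physical address 0. A sketch of reading one entry (this assumes flat access to low memory, e.g. real mode or a memory dump; on a protected-mode OS this pointer would simply fault):

    #include <stdint.h>
    #include <stdio.h>

    /* One IVT slot: 16-bit offset, then 16-bit segment. */
    struct ivt_entry {
        uint16_t offset;
        uint16_t segment;
    };

    int main(void)
    {
        /* The IVT begins at physical address 0x0000. */
        const struct ivt_entry *ivt = (const struct ivt_entry *)0x0000;
        /* Vector 0x13 is the BIOS disk-services handler. */
        printf("INT 13h handler at %04X:%04X\n",
               ivt[0x13].segment, ivt[0x13].offset);
        return 0;
    }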

Likewise, the BIOS may implement "emulated" versions of calls to read/set the RTC. A modern OS simply will not execute all of these legacy handlers, since it has no need to lean on the BIOS for that support -- for instance, there may be a kernel driver that knows how to talk to your PCH directly to mess with the RTC settings.
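
As a concrete example of not leaning on the BIOS: on a PC, a driver can read the clock directly through the classic CMOS register pair (index port 0x70, data port 0x71). A minimal sketch, omitting the BCD conversion and update-in-progress check a real driver needs:

    #include <stdint.h>
    #include <sys/io.h>             /* inb()/outb(); requires ioperm() */

    static uint8_t cmos_read(uint8_t reg)
    {
        outb(reg, 0x70);            /* select the RTC/CMOS register */
        return inb(0x71);           /* read its value */
    }

    /* Seconds live in CMOS register 0x00 (usually BCD-encoded). */
    uint8_t rtc_seconds(void)
    {
        return cmos_read(0x00);
    }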

As you can imagine, going through the BIOS this way is very, very slow and no longer used by modern software. Instead, the OS owns the hardware and provides an abstraction layer that lets graphical applications use the GPU's drivers to perform these tasks; the device itself is usually PCIe from a SW POV and is memory-mapped.
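
A sketch of what memory-mapped access looks like once such a BAR has been mapped into the CPU's address space (the base pointer and register offset here are invented):

    #include <stdint.h>

    #define CTRL_REG_OFFSET 0x40    /* hypothetical control register */

    /* One ordinary store through a volatile pointer -- a single bus
     * write straight to the device, with no BIOS code on the path. */
    static void mmio_write32(volatile uint8_t *bar, uint32_t off,
                             uint32_t val)
    {
        *(volatile uint32_t *)(bar + off) = val;
    }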

Likewise, if you look at the Linux storage stack below, you'll see that the underlying kernel drivers take care of talking to the hardware without using the BIOS -- all the code executed is from your kernel.

[Diagram: the Linux storage stack]
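
To watch that stack do its job, a plain user-space read of a block device is enough: every layer the request crosses (VFS, block layer, device driver) is kernel code, and no BIOS routine runs at any point. A minimal sketch (assumes /dev/sda exists and you have the privileges to read it):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char sector[512];
        int fd = open("/dev/sda", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* This read() enters the kernel and traverses the whole
         * storage stack down to the disk driver. */
        ssize_t n = read(fd, sector, sizeof sector);
        printf("read %zd bytes\n", n);
        close(fd);
        return 0;
    }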

Now, regarding different I/O address maps and such, recall that x86 has both an I/O address space and a memory address space. If you recall Plug and Play: on boot-up, your BIOS goes through and enumerates the PCI device tree, which on modern systems basically encompasses all of your peripherals, at least from a SW POV (i.e. the DRAM controller sits on PCIe bus 0, your USB controllers are PCI devices from a SW POV, etc.). Using the BARs (Base Address Registers), the BIOS knows how much address space, and of what type (memory or I/O), each device wants, and it will do its best to accommodate the request.
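
A sketch of what that enumeration looks like at the lowest level, using the legacy PCI configuration mechanism on I/O ports 0xCF8/0xCFC (a BIOS or kernel walks every bus/device/function this way; iopl() privileges assumed):

    #include <stdint.h>
    #include <sys/io.h>             /* outl()/inl(); requires iopl() */

    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev,
                                   uint8_t fn, uint8_t reg)
    {
        uint32_t addr = (1u << 31)              /* enable bit */
                      | ((uint32_t)bus << 16)
                      | ((uint32_t)dev << 11)
                      | ((uint32_t)fn  << 8)
                      | (reg & 0xFC);
        outl(addr, 0xCF8);          /* select the config register */
        return inl(0xCFC);          /* read it */
    }

    /* A vendor ID of 0xFFFF means "nothing here" -- this is how
     * enumeration discovers which slots are populated. Offset 0x10
     * would be BAR0 for a device that is present. */
    int pci_device_present(uint8_t bus, uint8_t dev)
    {
        return (pci_cfg_read32(bus, dev, 0, 0x00) & 0xFFFF) != 0xFFFF;
    }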

The final mapping is passed up to the OS at hand-off, and the OS can choose to respect it or do its own enumeration phase. Linux, for example, has 'quirks' that can be applied to given PCI device IDs before the OS has booted, and you may recall kernel boot parameters that affect how much memory devices are allocated, which IRQs they end up with, etc.
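
A Linux PCI quirk, for reference, is just a fixup hook keyed to a vendor/device ID and run during enumeration. A minimal in-kernel sketch (the IDs and the fixup body are placeholders; the macro itself comes from <linux/pci.h>):

    #include <linux/pci.h>

    static void fake_device_fixup(struct pci_dev *dev)
    {
        /* e.g. correct a bogus BAR size or mask a broken feature */
        dev_info(&dev->dev, "applying hypothetical quirk\n");
    }
    DECLARE_PCI_FIXUP_EARLY(0x1234, 0x5678, fake_device_fixup);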

  • I am actually only trying to understand retro computer systems before I attempt to understand modern ones. So I was correct about the BIOS ISRs? For example, on a system running Unix from the 70s, if the system call "read" was made, the kernel would determine the desired routine based on the system call table, then jump to and execute the routine (from the system call table), which relays the address the application wants to read to the correct BIOS routine based on the Interrupt Vector Table? The BIOS would then return the data read to the kernel, which would relay it to the app?
    – KeatonB
    Commented Jun 28, 2016 at 22:56
