2
\$\begingroup\$

The modern microprocessors I've dealt with could have 2 modes: user and superuser (and sometimes this difference was only in the manual and not actually implemented, as with the Nios II, which states that it has 2 modes but implements only 1). I therefore wonder whether this is true of microprocessors in general, i.e. whether there is little advantage in adding more than 2 modes. For instance, could there be a need for a third mode, a "supersuperuser", which in practice could change the privileges of "superusers"? And is it this mode distinction in the CPU that has caused the design split between modern operating systems, where some are called "microkernels" because they load device drivers in user mode rather than in superuser mode, which is the way of a monolithic kernel? What is the history of how the 2 CPU modes developed? Another SE comment says:

I saw the names "user mode" and "supervisor mode" in ARM2 first (late 1980's),

So early microprocessors like the Intel 8080 and Zilog Z80 didn't have modes, so any program that was run could execute any instruction?

https://superuser.com/questions/634733/why-so-many-modes-are-in-cpu

Are these 2 modes usually implemented in hardware and if so, what does the implementation look like? What is it that changes when a microprocessor switches modes between user and superuser?

\$\endgroup\$
1
  • \$\begingroup\$ While they maybe weren't microprocessors in the usual sense, the general idea dates back further: I remember DEC VAX systems (from the 70's) had four levels. They were kernel, executive, supervisor and user. \$\endgroup\$
    – PeterJ
    Commented Feb 15, 2014 at 11:23

4 Answers

5
\$\begingroup\$

Rather a lot of questions in this question, but I can address some of them.

Intel have four modes known as "rings", but two are frequently unused.

Intel 8080 and Zilog Z80 didn't have modes so any program that was run could run any instruction?

Correct. The privilege separation technique had been invented, but the complexity cost of adding it to microprocessors was large and those processors were used in single-user, single-process systems with no networking. Security simply wasn't a consideration.

What it "looks like" depends on where you're looking from. The programmer's guide for ARM shows the programmer's view for that architecture. The electrical implementation may vary - whether the registers are physically swapped out or just renamed.

It's necessarily implemented in hardware so that it cannot be circumvented.

\$\endgroup\$
9
\$\begingroup\$

The history of this started with timesharing, and the purpose was to isolate user processes from each other.

When there is more than one independent user on a machine, it is to each user's advantage to take as much of the resources of the machine as possible while leaving the others with nothing. Without some kind of hardware protection, once a user got the CPU he could keep it forever.

This was dealt with by having different modes, sometimes called rings or privilege levels. User processes run at the lowest privilege and cannot physically take over the machine. This works in conjunction with protections for areas of memory, also based on the privilege level.

In the most basic form, you only need two privilege levels: user and OS. Historically there have been machines that had more; four seemed to be a popular number for a while. However, operating systems rarely made much use of more than two privilege levels.

Microcontrollers and early microprocessors don't have privilege levels because they are not intended for applications with hostile competing processes.

\$\endgroup\$
2
  • \$\begingroup\$ Everybody should know this fundamental; otherwise one is like a swimmer in the middle of the ocean \$\endgroup\$
    – GR Tech
    Commented Feb 15, 2014 at 12:44
  • \$\begingroup\$ Virtualization and "secure software" encourage adding privilege modes. \$\endgroup\$
    – user15426
    Commented Feb 21, 2014 at 2:37
1
\$\begingroup\$

Almost everything that can be done by having a CPU distinguish supervisor and user modes can be accomplished with a memory management unit which is external to the CPU and controlled as an I/O device, but there are a few caveats:

-1- In general, it is not desirable for user-mode programs to have the ability to disable interrupts, but if user code performs a "disable interrupts" instruction it may be difficult for an external MMU to do anything about it. An MMU could snoop the bus and trigger an NMI if it sees user-mode code fetching a "disable interrupts" instruction, but it would be easier to have the CPU block the instruction itself.

-2- Interrupts need to be able to do things which user-mode can't, which implies that when an interrupt is taken the MMU needs to switch out of "restricted user" mode. A unit which watches what's going on may be able to do this, but it's more easily handled in the CPU.

-3- User mode shouldn't be able to corrupt the stack used by interrupts. It's possible for interrupt handlers to take care of this even when using an external MMU with a processor that knows nothing about user/supervisor modes, but it's a lot more efficient to have a processor which can switch stack pointers internally.

While there can be advantages to having more modes than just user/supervisor, it's often not worth the added complexity. The hardware for a two-level system is simpler than the hardware necessary to work around its absence, but adding more levels complicates hardware rather than simplifying it. Further, having software emulate more levels on top of a two-level hardware system may be easier than performing such emulation when using multiple hardware levels.

PS: While it's bad for user-mode tasks to be able to disable interrupts for arbitrary periods, I've often wished for processors to include a "temporary interrupt disable" counter: an instruction which would disable interrupts temporarily (e.g. for the next 8 instructions or so). If the instruction was executed again within the next 8 instructions while an interrupt had become pending, the interrupt would be taken immediately, and on return the instruction would execute again. Such a feature would greatly ease the writing of interrupt-safe data structures on single-processor machines.

\$\endgroup\$
5
  • \$\begingroup\$ If unprivileged software can change the MMU without restriction, protection is not guaranteed. If unprivileged software can gain the permission to modify the interrupt handler code (assuming it is not true ROM), it does not need to disable interrupts to hijack the system. \$\endgroup\$
    – user15426
    Commented Feb 21, 2014 at 2:35
  • \$\begingroup\$ @PaulA.Clayton: The MMU needs to know whether running code is allowed to reconfigure the MMU, but otherwise most other issues can be controlled by simply changing memory and/or I/O mappings. If a user-mode task doesn't have the interrupt vectors or handlers in its memory map, there won't be any risk of it monkeying with them. \$\endgroup\$
    – supercat
    Commented Feb 21, 2014 at 16:35
  • \$\begingroup\$ Without a privileged mode, the MMU would need to use some other information to determine privilege (which effectively becomes a privileged mode). Secrecy (like address randomization) reduces risk but does not eliminate risk. As the answer notes, workarounds are more complex (with greater risk of accidentally leaving a security hole) than providing a privileged mode. \$\endgroup\$
    – user15426
    Commented Feb 21, 2014 at 19:02
  • \$\begingroup\$ @PaulA.Clayton: My main point is that provided the MMU has certain necessary abilities, the CPU doesn't need to have any concept of user/supervisor modes. A CPU which doesn't understand such modes may not know when code should or should not be allowed to do certain things, but that won't matter if the MMU can block forbidden actions and trigger an NMI. Adding a small amount of user/supervisor-mode logic in the CPU can eliminate the need for a larger amount of MMU logic to monitor code execution and interrupts, but such CPU logic isn't essential for security. \$\endgroup\$
    – supercat
    Commented Feb 21, 2014 at 19:55
  • \$\begingroup\$ @PaulA.Clayton: My secondary point was that rather than having various levels of privilege which vary in what they're allowed to do, it's often more helpful to say that any code which can reconfigure the MMU is "supervisor", and any code which can't is allowed to do whatever is permitted by the things that are mapped into its address space. The question of whether a piece of code can access a device should be controlled by whether the device is mapped anywhere in its address space, rather than by whether it has a sufficient privilege level. \$\endgroup\$
    – supercat
    Commented Feb 21, 2014 at 20:00
-2
\$\begingroup\$

Microprocessors are, at bottom, nothing more than a collection of binary switches. There can be paths pre-connected among the switches that work similarly to computer programs, but these connected paths do not change the fact that a microprocessor is only a collection of binary switches.

The "2 modes: User and superuser" that you spoke of are either hard-wired connections on the CPU or connections elsewhere.

Having said that: if these pre-connections exist on a microprocessor, they can dictate how commands from programs (which are likewise nothing more than binary) flow through the collection of binary switches. For example, in superuser mode more paths might be allowed to complete than in user mode. To make this happen, the manufacturer of the microprocessor might block certain logic results from returning when the switch that enables superuser operation is turned off (i.e. in user mode). Another way of looking at this is that the manufacturer's default could be superuser mode, with the operating system turning on a blocking switch (for user mode) by default until the operating system is told to stop blocking superuser logic returns.

\$\endgroup\$
1
  • \$\begingroup\$ System/Kernel code is protected by trust certificates. To upgrade the OS and protected resources you must supply a certificate or any change will be blocked. \$\endgroup\$
    – Sparky256
    Commented Mar 17 at 0:32
