
I understand that if a system consists of multiple hardware threads, the scheduler assigns software threads to hardware threads.

However, hypothetically, let's imagine a system that consists of only a single hardware thread. Is the execution of multiple software threads forbidden, or does the program execute sequentially?

  • Pre-emptive multi-tasking was common long before hardware multi-processing was. E.g. Windows 95 introduced such a scheduler, and it has been the standard approach in Unix since forever. A pre-emptive scheduler occasionally suspends the running task and lets another thread execute for the next timeslice. This just requires a hardware timer or interrupt that causes the scheduler to run when the timeslice is over.
    – amon
    Commented Feb 1, 2021 at 13:00
  • A system does not consist of hardware or software threads, that whole statement is misleading. An (operating) system may provide multiple threads to a process, and there is usually hardware support like interrupts used, even if the CPU has only a single core. So please clarify what you mean by "single hardware-threads" - only single core CPUs? Or a CPU with no interrupts? Or something else?
    – Doc Brown
    Commented Feb 1, 2021 at 20:42

3 Answers


This is far from hypothetical:

  • Before CPUs offered hardware multithreading, running multiple threads and processes on a single-threaded CPU was common practice, supported by many OSes.
  • There are still a lot of microcontrollers around that work with a single single-threaded core, so it's still a relevant question.

The way multiple threads are run on a single CPU execution thread depends a lot on the OS / execution environment / library that you are using and the underlying threading principle:

  • preemptive multithreading works similarly to multiprocessing, but is much lighter and faster: threads execute in small time slices, one after the other. The frequent switching creates the illusion of concurrency, at some cost in performance.
  • cooperative multithreading lets each thread decide when to switch to another thread (see the sketch after this list). In the worst case, two threads may simply execute one after the other, sequentially. The impression of concurrency is less convincing, but performance is better (less switching overhead).
  • usually I/O operations involve some kind of waiting. I/O calls therefore often lead, in both models, to a potential thread switch. Since I/O waiting times (milliseconds) are orders of magnitude longer than thread switches (nanoseconds), this kind of switch has little impact on perceived performance but significantly increases system throughput in I/O-intensive applications.
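
To make the cooperative model concrete, here is a minimal sketch (Python is my choice here, purely for illustration; none of the names below come from the answer): each "thread" is a generator that yields at explicit switch points, and a toy round-robin scheduler interleaves them on a single execution stream.

    # Toy cooperative scheduler: each "thread" is a generator that
    # explicitly yields control; a round-robin loop runs them all on
    # one execution stream. Real cooperative runtimes are far more
    # elaborate; this only shows the switching principle.
    from collections import deque

    def worker(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # cooperative switch point: hand control back to the scheduler

    def run(tasks):
        ready = deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                next(task)          # run the task until its next yield
                ready.append(task)  # it yielded, so requeue it
            except StopIteration:
                pass                # task finished; drop it

    run([worker("A", 3), worker("B", 3)])
    # Prints A and B steps interleaved, although only a single
    # execution stream ever exists.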



First, consider that if a single-core system allowed only one thread, then by the same logic a processor with 16 cores would allow only sixteen threads; yet my Mac runs a few hundred right now.

What happens is that you can have as many threads as you want (within reason). Typically each core starts by running one thread; when that thread needs to wait for something, or when it has used up its share of time, it is paused and another thread starts running.
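
You can see this on any machine. The sketch below (Python, my own illustrative choice, not something from this answer) starts far more threads than the machine has cores, and the OS scheduler time-slices them onto whatever cores exist:

    import os
    import threading
    import time

    def wait_a_bit(i):
        # A blocking wait; while this thread sleeps, the scheduler
        # runs other threads on the same core(s).
        time.sleep(0.1)

    threads = [threading.Thread(target=wait_a_bit, args=(i,)) for i in range(200)]
    for t in threads:
        t.start()
    print(f"{threading.active_count()} threads alive on {os.cpu_count()} core(s)")
    for t in threads:
        t.join()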

  • To slightly extend this to what OP presumably wants to know: it's the machine's (not the application's) prerogative to decide which thread to run at which time on which core. So the machine might interleave your two threads on its single core, or it could choose to do one after the other.
    – Flater
    Commented Feb 1, 2021 at 13:46
  • Worth mentioning that threads can be given different priority levels. There is no need to wait for a thread to stop; threads with higher priority will be scheduled first, which could break any "feeling" of sequentiality
    – Laiv
    Commented Feb 1, 2021 at 13:57
  • @Flater Actually it is the operating system's prerogative. (Those are not the same thing)
    – user253751
    Commented Feb 2, 2021 at 10:40
  • @user253751: From the perspective we're discussing, "machine" is sufficiently descriptive (and commonly used). You're taking it too literally. It's not referring to the actual physical hardware; it's referring to whatever environment the application will run on. Whether that's an actual physical hardware device, which OS it is, whether it's a VM, ... is irrelevant for this topic.
    – Flater
    Commented Feb 2, 2021 at 10:42

Let me try to explain:

So far as I am aware (and I've been around these parts a long time), there is no such thing as a "hardware" or a "software" thread. "Threading" is a purely software concept, agnostic to whatever underlying hardware runs it.

The software concept is that "the computer's workload" consists of one or more independent "processes," each of which owns such resources as memory segments and file handles. Then, within each process, we have one or more independent "threads," all of which share the owning process's resources.
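
A small sketch of that ownership distinction (Python chosen by me for illustration; the variable and function names are hypothetical): threads within one process observe each other's writes to shared memory, while a separate process only gets its own copy.

    import threading
    import multiprocessing

    counter = 0

    def bump():
        global counter
        counter += 1

    if __name__ == "__main__":
        # Threads share the owning process's memory: both increments
        # are visible in this process afterwards.
        t1, t2 = threading.Thread(target=bump), threading.Thread(target=bump)
        t1.start(); t2.start(); t1.join(); t2.join()
        print("after threads:", counter)   # -> 2

        # A separate process gets its own copy of 'counter'; its
        # increment never reaches the parent.
        p = multiprocessing.Process(target=bump)
        p.start(); p.join()
        print("after process:", counter)   # -> still 2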

The hardware concept, then, is that the operating system must find a way to run this workload on whatever hardware it finds that it has. If there is only one CPU/core, then necessarily only one thread (of one process) can execute at a time; with multiple cores, threads can indeed run "physically simultaneously."

The programmer's concept therefore must be that it is entirely unpredictable: you must never write software that depends on the operating system's scheduling decisions; indeed, you must assume exactly the opposite.
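
To illustrate that unpredictability (a Python sketch of my own, not from this answer): two unsynchronized threads incrementing a shared counter can produce a scheduler-dependent result, while a lock makes the outcome deterministic.

    import threading

    unsafe_total = 0
    safe_total = 0
    lock = threading.Lock()

    def unsafe_add():
        global unsafe_total
        for _ in range(100_000):
            unsafe_total += 1   # read-modify-write; interleaving may lose updates

    def safe_add():
        global safe_total
        for _ in range(100_000):
            with lock:          # serialize the read-modify-write
                safe_total += 1

    for target in (unsafe_add, safe_add):
        workers = [threading.Thread(target=target) for _ in range(2)]
        for t in workers:
            t.start()
        for t in workers:
            t.join()

    print("unsafe:", unsafe_total)  # often less than 200000; scheduler- and interpreter-dependent
    print("safe:  ", safe_total)    # always 200000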
