23

From the books I've read on Linux system programming, it seems like signals were the primary way to communicate events between processes. They were the gateway to many interesting capabilities, such as timers, interrupting sleeping threads, I/O events and so forth.

When reading books on multithreading and latency control, however, I do not remember seeing signals at all. I believe signals are more privileged, since they can interrupt a sleeping thread, which seems like a good thing when a thread sleeps for too long (I know there are also semaphores and condition variables, but signals seem to be the most universal way to do that), aside from the other functionality the kernel provides.
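For illustration, here is the kind of thing I mean - a minimal sketch of my own (standard POSIX calls, nothing exotic) where a signal cuts a long sleep short:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static void on_alarm(int sig) { (void)sig; /* exists only to interrupt the sleep */ }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_alarm;          /* no SA_RESTART: let the call fail with EINTR */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        alarm(1);                          /* the kernel sends SIGALRM in about a second */

        struct timespec ts = { .tv_sec = 60, .tv_nsec = 0 };
        if (nanosleep(&ts, NULL) == -1 && errno == EINTR)
            printf("the 60-second sleep was cut short by a signal\n");
        return 0;
    }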

So my question is: why did the use of signals disappear? Is it because higher-level, VM-hosted languages took over? Or were there innovations that made them obsolete? I've never seen things like system timers in C++ libraries, so I doubt anything better was invented.

6
  • 2
    This may well be a better fit on Unix & Linux but I've had a crack at an answer anyway. Commented Nov 3, 2021 at 14:18
  • 3
    I've never come across signals being used for inter-process communication. They are sent to a process by the operating system. You might use the kill command to ask the OS to send a SIGKILL to a process, but it doesn't really come from the kill process.
    – David Arno
    Commented Nov 3, 2021 at 14:23
  • 1
    @DavidArno, perhaps I didn't word it properly; what I meant was communicating events, not communication in general. Commented Nov 3, 2021 at 14:23
  • I guess I didn't understand the original context of their usage properly. Is there any book on when, why and how signals were used and, perhaps, are used today? Commented Nov 3, 2021 at 14:28
  • 11
    They didn't go anywhere; they're still alive and well in modern operating systems. The premise of this question is therefore flawed. Commented Nov 3, 2021 at 23:19

5 Answers

51

it seems like signals were the primary way to communicate between processes

I'd disagree with this. Signals are/were the primary way for a "supervisor" process to control a "supervised" process - e.g. init wanting to stop a process at system shutdown, or a shell wanting to notify a subprocess of something. They were never really the primary way for cooperating processes to communicate - a shell pipeline communicates via pipes, not via signals. In particular, signals convey essentially no information beyond their type - if you actually wanted to communicate a non-trivial amount of information, you needed a separate mechanism to pass it, whether a pipe, shared memory, a temporary file or something else.
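To make that concrete, here's a rough sketch (the message and names are mine, purely illustrative): kill() can only say "signal N happened", so the actual data has to travel over a pipe:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) return 1;      /* fd[0]: read end, fd[1]: write end */
        signal(SIGUSR1, SIG_IGN);          /* keep the demo signal from killing the child */

        pid_t child = fork();
        if (child == 0) {                  /* child: the data arrives on the pipe */
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            _exit(0);
        }

        kill(child, SIGUSR1);              /* conveys nothing beyond its type */
        const char *msg = "the actual payload";
        write(fd[1], msg, strlen(msg));    /* the information itself needs a real channel */
        wait(NULL);
        return 0;
    }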

9
  • 14
    I think you're mixing up various concepts here; Unix and Unix-like OSes have always had preemptive multithreading - what the Linux RT patch did was a relatively small (but complicated!) tweak to things in the kernel and really nothing to do with user space apps. Commented Nov 3, 2021 at 14:43
  • 8
    @Incomputable, en.wikipedia.org/wiki/Signal_(IPC) has quite a good write up on the problems with using signals. I think "they were not really popular in the first place" is pretty much the right answer.
    – David Arno
    Commented Nov 3, 2021 at 14:46
  • 2
    @Incomputable That's a very different question. (And one which seems fairly obvious to someone who used desktop machines with co-operative multitasking: stability. A busy program would often cause GUI slow-downs and freezes, and a dodgy program would sometimes lock up the entire system and need a reboot. Pre-emptive multitasking made the system much smoother and more responsive, and far less likely to lock up. — Memory protection helped a lot with that last point too, of course.)
    – gidds
    Commented Nov 3, 2021 at 23:35
  • 2
    @Incomputable : If your question is "why there was a shift from cooperative threading to preemptive threading", why did you not include any of that in your Question? Commented Nov 4, 2021 at 14:57
  • 3
    @EricTowers, I wanted to get an answer to this question first. A colleague has been insisting on using signals for communication between threads, and I knew it was a bad idea. Also, I thought this question was more answerable than the shift question, and given that I had no idea about the standards on SoftwareEngineering.SE, I wanted to dip my toes in with this question first. I will ask the shift question separately once I improve my understanding of what is going on with RTOSes. Commented Nov 4, 2021 at 15:29
32

Signals haven't gone anywhere. They do about as much now as they did in the 1970s. (A little more, but not much more.)

Signals were, and are, a crude way of letting a process know that something happened. When a process reacts to a signal, that signal usually means either “go away” (the primary intent of signals, which is why the system call to send a signal is called kill) or “wake up”. There are only a couple dozen distinct signals; they don't carry any associated payload, and they don't even identify who sent them. While all signals are guaranteed to be delivered, the kernel can conflate identical signals (i.e. if the same signal is sent twice to the same process before the receiving process reacts, the process might only see the signal once). (Some of this is no longer strictly true on modern systems.) Because signals have no payload, they cannot sensibly be used alone for interprocess communication.
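As a sketch of that modern exception (a POSIX real-time extension; the numbers here are made up for illustration): with SA_SIGINFO a handler can at least see the sender's PID, and sigqueue(2) can attach a one-word payload:

    #define _POSIX_C_SOURCE 200809L
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void info_handler(int sig, siginfo_t *si, void *ctx) {
        (void)ctx;
        /* printf is not async-signal-safe; acceptable only in a toy demo */
        printf("signal %d from pid %ld, payload %d\n",
               sig, (long)si->si_pid, si->si_value.sival_int);
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_sigaction = info_handler;
        sa.sa_flags = SA_SIGINFO;              /* ask for the extended siginfo_t */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        union sigval v = { .sival_int = 42 };  /* the one-word payload */
        sigqueue(getpid(), SIGUSR1, v);        /* queue the signal to ourselves */
        sleep(1);                              /* give delivery a moment */
        return 0;
    }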

Unix originally did not offer shared memory between processes. The primary way for processes to communicate was pipes, which let one process send data to another as a byte stream (bidirectional communication requires a pair of pipes). More sophisticated mechanisms appeared later, in particular sockets, which are bidirectional and can preserve message boundaries. Later still, Unix systems acquired multithreading inside the same memory space, which allows more performance at the expense of making race conditions much harder to avoid.
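A quick sketch of that difference (the details are mine): a socketpair is bidirectional, and with SOCK_DGRAM it preserves message boundaries, unlike a pipe's undifferentiated byte stream:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) == -1) return 1;

        write(sv[0], "one", 3);
        write(sv[0], "two", 3);                   /* two distinct datagrams */

        char buf[16];
        ssize_t n = read(sv[1], buf, sizeof buf); /* returns exactly "one", not "onetwo" */
        printf("first message: %.*s (%zd bytes)\n", (int)n, buf, n);

        write(sv[1], "ack", 3);                   /* and the channel works both ways */
        n = read(sv[0], buf, sizeof buf);
        printf("reply: %.*s\n", (int)n, buf);
        return 0;
    }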

Signals can interrupt a sleeping thread. They can also interrupt an active thread at any time. Because a process doesn't control when it receives signals, signal handling is hard: a signal handler has to be careful not to disrupt whatever the process was doing. Sensible concurrent programs avoid this kind of preemption as much as possible, and instead have specific points of synchronization where a thread checks for incoming messages or changes in shared memory.
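The classic shape of that advice, as a sketch (the flag name is mine): the handler does the bare minimum, and the thread checks the flag at a point of its own choosing:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigterm = 0;

    static void on_term(int sig) {
        (void)sig;
        got_sigterm = 1;                   /* async-signal-safe: just record the event */
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_term;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGTERM, &sa, NULL);

        while (!got_sigterm) {
            sleep(1);                      /* stands in for a unit of real work */
        }
        printf("shutting down cleanly\n"); /* back at a known-safe point */
        return 0;
    }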

1
  • 1
    Signals can (sometimes) identify the sender, and (sometimes) carry a (small) payload. See man sigaction and man sigqueue for details.
    – psmears
    Commented Nov 4, 2021 at 11:26
15

Signals were always rather quirky. The mechanism is very simple, which is why it was created in the first place, but because the signal handler can interrupt the process at literally any point, what you can do inside the handler is rather limited: you can't allocate memory (the signal might have just interrupted the allocation function), you can't take locks (the interrupted code might be holding the lock and cannot run to release it), etc. You can basically just set volatile variables and call a limited set of async-signal-safe system calls.

When you have a single-threaded program in C, you have enough control to stay within those limitations. But:

  • With the addition of threads, the lack of synchronization and poor control over which thread will handle the signal make everything a lot harder.
  • Higher-level languages do a lot of things under the hood that don't fit within those limitations, so what you can do in a signal handler there is even more limited.

So multi-threaded or event-driven programs, if they need to handle signals, tend to just create a pipe and send themselves the signal number over it, because that's about the only thing they can reliably do in the signal handler.
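Here is what that self-pipe trick looks like in outline (error handling trimmed, names mine):

    #include <fcntl.h>
    #include <poll.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static int self_pipe[2];

    static void handler(int sig) {
        unsigned char b = (unsigned char)sig;
        write(self_pipe[1], &b, 1);        /* write(2) is async-signal-safe */
    }

    int main(void) {
        pipe(self_pipe);
        fcntl(self_pipe[1], F_SETFL, O_NONBLOCK);  /* a full pipe must not block the handler */

        struct sigaction sa = {0};
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);
        sigaction(SIGTERM, &sa, NULL);

        struct pollfd pfd = { .fd = self_pipe[0], .events = POLLIN };
        for (;;) {
            if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
                unsigned char b;
                read(self_pipe[0], &b, 1);
                printf("signal %d arrived, handled in the event loop\n", b);
                break;                     /* e.g. begin a clean shutdown */
            }
        }
        return 0;
    }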

The signalfd(2) system call was created on Linux to do basically the same thing with less setup, but it's not portable, so the handler-and-pipe approach tends to be more common.
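For comparison, roughly the same loop with signalfd(2) - Linux-only, as noted, and the details here are a sketch rather than production code:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    int main(void) {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigaddset(&mask, SIGTERM);
        sigprocmask(SIG_BLOCK, &mask, NULL);   /* must block them, or normal delivery wins */

        int sfd = signalfd(-1, &mask, 0);      /* signals now arrive as readable data */

        struct signalfd_siginfo si;
        if (read(sfd, &si, sizeof si) == sizeof si)
            printf("signal %u from pid %u\n",
                   (unsigned)si.ssi_signo, (unsigned)si.ssi_pid);

        close(sfd);
        return 0;
    }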

And if there is already a pipe and a select or poll loop, sending requests over a pipe or socket is usually preferred to sending signals, because it can pass more complex messages; signals then only get handled where some cleanup on shutdown is desired for SIGTERM and SIGINT.

2
  • "quirky" - not to mention damned dangerous due to no diagnostics or other warnings at all when you write code that violates any of the restrictions you mention in your first paragraph - including something you didn't emphasize which is the very limited whitelist of system calls you're allowed to make, notably excluding everything from the stdio library!
    – davidbak
    Commented Nov 6, 2021 at 21:48
  • @davidbak, indeed. Though the stdio library is not system calls, but a somewhat non-trivial wrapper around them.
    – Jan Hudec
    Commented Nov 8, 2021 at 8:01
-1

Long, long ago, you could stop a running process in Unix simply by typing Ctrl-C. That would send a signal such as KILL to the process, and it would stop, because Unix would not let the process capture and otherwise handle that signal. If you were using MS-DOS, typing Ctrl-C would also stop a running process, but not until the process received the character through its input stream; i.e. the MS-DOS shell, such as it was, could not generate a signal. The result was lots of machine restarts to cope with rogue processes. Signals matter. How you deal with them matters too.

2
  • ctrl-c is still a signal. It's not KILL but SIGINT. By default, the process will die, but it can be caught and handled.
    – user10489
    Commented Nov 9, 2021 at 1:03
  • Correct. Thank you. It's been a decade or so since I worked every day on signals. Commented Nov 10, 2021 at 4:14
-2

I would suggest that any program, written in any programming language, that intends to run on UNIX or UNIX-like systems and doesn't handle signals is an incomplete program, akin to amateur software.

Any production-grade software that you want to run and control reliably needs to react appropriately to signals; it couldn't possibly be considered reliable, and certainly wouldn't be called stable, if it can't be controlled by signals and act in an expected manner.

why did the use of signals disappear?

In most software I've seen as a consultant: nowhere. Signals are everywhere. Even in the high-level programming languages you suggest don't need them, they're needed just as much as in C. Go and Rust are very new; Go is a high-level language that makes ubiquitous use of signals, and so does Rust - both implement signal handling in popular projects, if you go look through GitHub.

Signals are missing from a lot of hobbyist and amateur code bases, but this doesn't make the purpose and maturity of signals 'dead' by any means. If anything, the absence of signals is an indication of incomplete fundamental programming knowledge, and if anything is dying, it's competent UNIX and UNIX-like software.
