173

Everyone seems to say named pipes are faster than sockets for IPC. How much faster are they? I would prefer to use sockets because they can do two-way communication and are very flexible, but I will choose speed over flexibility if the difference is considerable.

3
  • 11
    Your mileage will vary. :) Profile typical use for your intended application, and pick the better of the two. Then profile anonymous pipes, sockets of other domains and families, semaphores and shared memory or message queues (SysV and POSIX), realtime signals with a word of data, or whatever. pipe(2) (er, mkfifo(3)?) may be the winner, but you won't know until you try.
    – pilcrow
    Commented Aug 6, 2009 at 1:31
  • 2
    SysV message queues FTW! I have no idea if they're fast, I just have a soft spot for them. Commented Sep 21, 2010 at 17:44
  • 6
    What is "speed" in this case? Overall data transfer rate? Or latency (how quickly the first byte gets to the receiver)? If you want fast local data transfer, then it's hard to beat shared memory. If latency is an issue, though, then the question gets more interesting... Commented Mar 30, 2016 at 20:28

12 Answers

129

The best results you will get with a shared memory solution.

Named pipes are only 16% better than TCP sockets.

Results were obtained with the following IPC benchmark setup:

  • System: Linux (Linux ubuntu 4.4.0 x86_64 i7-6700K 4.00GHz)
  • Message: 128 bytes
  • Messages count: 1000000

Pipe benchmark:

Message size:       128
Message count:      1000000
Total duration:     27367.454 ms
Average duration:   27.319 us
Minimum duration:   5.888 us
Maximum duration:   15763.712 us
Standard deviation: 26.664 us
Message rate:       36539 msg/s

FIFOs (named pipes) benchmark:

Message size:       128
Message count:      1000000
Total duration:     38100.093 ms
Average duration:   38.025 us
Minimum duration:   6.656 us
Maximum duration:   27415.040 us
Standard deviation: 91.614 us
Message rate:       26246 msg/s

Message Queue benchmark:

Message size:       128
Message count:      1000000
Total duration:     14723.159 ms
Average duration:   14.675 us
Minimum duration:   3.840 us
Maximum duration:   17437.184 us
Standard deviation: 53.615 us
Message rate:       67920 msg/s

Shared Memory benchmark:

Message size:       128
Message count:      1000000
Total duration:     261.650 ms
Average duration:   0.238 us
Minimum duration:   0.000 us
Maximum duration:   10092.032 us
Standard deviation: 22.095 us
Message rate:       3821893 msg/s

TCP sockets benchmark:

Message size:       128
Message count:      1000000
Total duration:     44477.257 ms
Average duration:   44.391 us
Minimum duration:   11.520 us
Maximum duration:   15863.296 us
Standard deviation: 44.905 us
Message rate:       22483 msg/s

Unix domain sockets benchmark:

Message size:       128
Message count:      1000000
Total duration:     24579.846 ms
Average duration:   24.531 us
Minimum duration:   2.560 us
Maximum duration:   15932.928 us
Standard deviation: 37.854 us
Message rate:       40683 msg/s

ZeroMQ benchmark:

Message size:       128
Message count:      1000000
Total duration:     64872.327 ms
Average duration:   64.808 us
Minimum duration:   23.552 us
Maximum duration:   16443.392 us
Standard deviation: 133.483 us
Message rate:       15414 msg/s
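
For reference only (this is not the benchmark code used above), a minimal sketch of how a round-trip latency measurement over anonymous pipes could be structured in C; short-read handling and error checks are omitted for brevity:

/* Sketch: round-trip a 128-byte message between parent and child
 * over two anonymous pipes and report the average per-message time.
 * Error checks and short-read handling are omitted. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define MSG_SIZE  128
#define MSG_COUNT 100000

int main(void)
{
    int p2c[2], c2p[2];                 /* parent->child, child->parent */
    char buf[MSG_SIZE] = {0};

    pipe(p2c);
    pipe(c2p);

    if (fork() == 0) {                  /* child: echo everything back */
        for (int i = 0; i < MSG_COUNT; i++) {
            read(p2c[0], buf, MSG_SIZE);
            write(c2p[1], buf, MSG_SIZE);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < MSG_COUNT; i++) {
        write(p2c[1], buf, MSG_SIZE);
        read(c2p[0], buf, MSG_SIZE);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double total_us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                      (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("average round trip: %.3f us\n", total_us / MSG_COUNT);
    return 0;
}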
7
  • 1
    Thanks for the detailed benchmarking. Do you mean "multiprocessing.Queue" with "Message Queue"?
    – ovunccetin
    Commented Nov 7, 2019 at 14:29
  • 1
    Message Queue is a system XSI message queue (man7.org/linux/man-pages/man0/sys_msg.h.0p.html)
    – chronoxor
    Commented Nov 13, 2019 at 13:33
  • 3
    "only 16 %" :-) 16% is huge if you have a million servers and you are the one paying the electricity bill. Also, 128 bytes is unrealistically small. Commented May 3, 2021 at 19:07
  • 2
    How would a named pipe compare to simply starting a process and passing the data as arguments? Commented Jul 22, 2021 at 1:39
  • @AxelRietschin, 128 bytes is NOT "unrealistically small". I'm currently looking for solutions for sending tens/hundreds of thousands of "randomly-sized" messages per second between some 20+ threads, with most messages being only 14 bytes.
    – binaryLV
    Commented Jun 7, 2023 at 6:55
81

I would suggest you take the easy path first, carefully isolating the IPC mechanism so that you can change from socket to pipe, but I would definitely go with socket first. You should be sure IPC performance is a problem before preemptively optimizing.

And if you get in trouble because of IPC speed, I think you should consider switching to shared memory rather than moving to pipes.

If you want to do some transfer speed testing, you should try socat, which is a very versatile program that allows you to create almost any kind of tunnel.

3
  • "You should be sure IPC performance is a problem before preemptively optimizing." Could you please explain that in more detail?
    – John
    Commented Jan 24, 2022 at 7:29
  • 1
    If an API is more convenient for you, because it allows you to write clear code or less code, then you should use it first. Once you have a working program, with a realistic data usage, then you can evaluate the performance of your program. By evaluating it, tracing it, you can get information on where the bottleneck is. If your bottleneck is IPC speed, then you can switch to a more complicated but faster API. Given a tradeoff between speed and readability, you should pick readability first, then measure. If IPC speed is still an issue, then you can make an informed choice.
    – shodanex
    Commented Jan 25, 2022 at 8:41
  • @john, also see Tim Post answer
    – shodanex
    Commented Jan 25, 2022 at 8:42
36

I'm going to agree with shodanex, it looks like you're prematurely trying to optimize something that isn't yet problematic. Unless you know sockets are going to be a bottleneck, I'd just use them.

A lot of people who swear by named pipes find small savings (depending on how well everything else is written), but end up with code that spends more time blocking for an IPC reply than it does doing useful work. Sure, non-blocking schemes help with this, but those can be tricky. Having spent years bringing old code into the modern age, I can say the speedup is almost nil in the majority of cases I've seen.

If you really think that sockets are going to slow you down, then go out of the gate using shared memory with careful attention to how you use locks. Again, in all actuality, you might find a small speedup, but notice that you're wasting a portion of it waiting on mutual exclusion locks. I'm not going to advocate a trip to futex hell (well, not quite hell anymore in 2015, depending upon your experience).

Pound for pound, sockets are (almost) always the best way to go for user space IPC under a monolithic kernel .. and (usually) the easiest to debug and maintain.

1
  • 4
    maybe some day in a distant utopian future we'll have a completely new, modular, modern kernel that implicitly offers all the (interprocess and others) abilities we currently walk over broken glass to accomplish... but hey.. one can dream
    – Gukki5
    Commented Aug 21, 2018 at 19:12
31

Keep in mind that sockets do not necessarily mean IP (and TCP or UDP). You can also use UNIX domain sockets (PF_UNIX), which offer a noticeable performance improvement over connecting to 127.0.0.1.
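
As a rough illustration, here is a minimal sketch of the server side of a UNIX domain stream socket in C; the path /tmp/demo.sock is just an example, and error handling is omitted. A client would connect() to the same path and then use plain read()/write(), exactly as with TCP:

/* Sketch: UNIX domain stream socket server. The socket path is an
 * arbitrary example; error handling is omitted. */
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/tmp/demo.sock", sizeof(addr.sun_path) - 1);

    unlink(addr.sun_path);                        /* remove a stale socket file */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 1);

    int client = accept(fd, NULL, NULL);
    char buf[128];
    ssize_t n = read(client, buf, sizeof(buf));   /* same read/write API as TCP */
    if (n > 0)
        write(client, buf, n);                    /* echo it back */

    close(client);
    close(fd);
    return 0;
}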

3
  • 1
    What about Windows?
    – Pacerier
    Commented Feb 19, 2017 at 21:10
  • 1
    @Pacerier Sadly, you can't create local sockets on Windows in the same way as the abstract namespace on UNIX. I have found PF_UNIX sockets to be substantially faster (>10%) than most other methods described on this page. Commented Apr 14, 2017 at 15:04
  • 3
    devblogs.microsoft.com/commandline/af_unix-comes-to-windows update, Unix sockets are available in Windows 10 now.
    – eri0o
    Commented Feb 19, 2020 at 20:18
29

As is often the case, numbers say more than feelings. Here are some data: Pipe vs Unix Socket Performance (opendmx.net).

This benchmark shows pipes being about 12 to 15% faster.

18

You can find a runnable benchmark here: https://github.com/goldsborough/ipc-bench

Regards

11

If you do not need speed, sockets are the easiest way to go!

If what you are looking for is speed, the fastest solution is shared memory, not named pipes.

9

For two way communication with named pipes:

  • If you have few processes, you can open two pipes for two directions (processA2ProcessB and processB2ProcessA)
  • If you have many processes, you can open in and out pipes for every process (processAin, processAout, processBin, processBout, processCin, processCout etc)
  • Or you can go hybrid as always :)

Named pipes are quite easy to implement.

E.g., I implemented a project in C with named pipes; thanks to standard file-based input/output communication (fopen, fprintf, fscanf, ...), it was easy and clean (if that is also a consideration).
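
To illustrate the two-pipe approach with standard file I/O, here is a minimal sketch of process A's side in C (the FIFO names are examples); process B would open the same FIFOs with the directions swapped:

/* Sketch: process A's side of the two-FIFO scheme (names are examples).
 * Process B opens the same FIFOs with the directions swapped. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    mkfifo("/tmp/a2b", 0666);         /* A writes, B reads */
    mkfifo("/tmp/b2a", 0666);         /* B writes, A reads */

    /* Opening a FIFO blocks until the other end opens it too,
     * so both processes must open them in a compatible order. */
    FILE *out = fopen("/tmp/a2b", "w");
    FILE *in  = fopen("/tmp/b2a", "r");

    fprintf(out, "ping 42\n");
    fflush(out);

    char word[16];
    int value;
    fscanf(in, "%15s %d", word, &value);
    printf("got: %s %d\n", word, value);

    fclose(out);
    fclose(in);
    return 0;
}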

I have even used them from Java (serializing and sending objects over them!).

Named pipes have one disadvantage:

  • they do not scale across multiple computers the way sockets do, since they rely on the filesystem (assuming a shared filesystem is not an option)
9

One problem with sockets is that they do not give you a way to flush the buffer. There is something called the Nagle algorithm, which holds back small writes and, in combination with delayed ACKs, can add a delay on the order of 40 ms. So if it is responsiveness and not bandwidth you care about, you might be better off with a pipe.

You can disable Nagle with the socket option TCP_NODELAY, but then the reading end can no longer count on two short messages being coalesced into one single read call.
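
For illustration, a minimal sketch of disabling Nagle on an already-connected TCP socket descriptor:

/* Sketch: disable Nagle's algorithm on an existing, connected TCP
 * socket fd so that small writes are sent immediately. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}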

So test it. I ended up using none of these and implemented memory-mapped queues with a pthread mutex and semaphore in shared memory, avoiding a lot of kernel system calls (but today those aren't very slow anymore).
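
The queue itself is not shown here, but a sketch of the kind of process-shared primitive it relies on, a pthread mutex placed in shared memory and usable across fork(), could look like this (error handling omitted):

/* Sketch: a pthread mutex living in shared memory, usable by parent
 * and child after fork(). A real queue would add a ring buffer and
 * condition variables or semaphores next to it. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared {
    pthread_mutex_t lock;
    int counter;
};

int main(void)
{
    struct shared *s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &attr);

    if (fork() == 0) {                 /* child increments under the lock */
        pthread_mutex_lock(&s->lock);
        s->counter++;
        pthread_mutex_unlock(&s->lock);
        _exit(0);
    }

    pthread_mutex_lock(&s->lock);      /* parent does the same */
    s->counter++;
    pthread_mutex_unlock(&s->lock);

    wait(NULL);
    printf("counter = %d\n", s->counter);
    return 0;
}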

1
  • 3
    "So test it" <-- words to live by.
    – Koshinae
    Commented Apr 18, 2016 at 9:03
9

I know this is a super old thread, but it's an important one, so I'd like to add my $0.02. UDS are much faster in concept for local IPC. Not only are they faster, but if your memory controller supports DMA then UDS cause almost no load on your CPU. The DMA controller will just offload memory operations from the CPU. TCP needs to be packetized into chunks of MTU size, and if you don't have a smart NIC or TCP offload somewhere in specialized hardware, that causes quite a bit of load on the CPU. In my experience, UDS are around 5x faster on modern systems in both latency and throughput.

These benchmarks come from this simple benchmark code. Try it for yourself. It also supports UDS, pipes, and TCP: https://github.com/rigtorp/ipc-bench

[Image: local benchmark results]

I see a CPU core struggling to keep up with TCP mode while sitting at about ~15% load under UDS, thanks to DMA. Note that remote DMA (RDMA) gains the same advantages on a network.

8

Named pipes and sockets are not functionally equivalent; sockets provide more features (they are bidirectional, for a start).

We cannot tell you which will perform better, but I strongly suspect it doesn't matter.

Unix domain sockets will do pretty much what TCP sockets will, but only on the local machine and with (perhaps a bit) lower overhead.

If a Unix socket isn't fast enough and you're transferring a lot of data, consider using shared memory between your client and server (which is a LOT more complicated to set up).
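
For illustration, a minimal sketch of POSIX shared memory (shm_open plus mmap); the name /demo_shm is just an example. Note that shared memory by itself provides no notification, so it is usually paired with a semaphore, pipe, or similar mechanism to signal the other side:

/* Sketch: create and map a POSIX shared-memory object. Another process
 * calls shm_open() with the same name to map the same bytes.
 * Link with -lrt on older glibc; error handling omitted. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                          /* size the region */

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(p, "hello from process A");            /* visible to the other process */

    /* Shared memory has no built-in "new data" notification; pair it
     * with a semaphore, pipe, or eventfd to wake the consumer. */

    munmap(p, 4096);
    close(fd);
    /* shm_unlink("/demo_shm") once both sides are finished. */
    return 0;
}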

Unix and NT both have "named pipes", but they are totally different in feature set.

2
  • 3
    Well if you open 2 pipes, then you get bidi behavior too.
    – Pacerier
    Commented Feb 19, 2017 at 21:13
  • Absolutely dumb question but what is "shared memory"? I've messed around with shm_open but I need to notify a process that I have incoming work it needs to do - which I don't think that does. Is shared memory the same thing as a memory mapped file?
    – David Alsh
    Commented Feb 15 at 8:06
4

You can use a lightweight solution like ZeroMQ [zmq/0mq]. It is very easy to use and dramatically faster than sockets.
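
As a rough sketch (not from this answer), the server side of a ZeroMQ REP socket over the local ipc:// transport using the libzmq C API could look like this; the endpoint name is an example:

/* Sketch: ZeroMQ REP (reply) socket over the local ipc:// transport.
 * A REQ client connects to the same endpoint and sends requests.
 * Build with -lzmq. */
#include <zmq.h>

int main(void)
{
    void *ctx  = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(sock, "ipc:///tmp/demo-zmq");        /* local-only transport */

    char buf[128];
    int n = zmq_recv(sock, buf, sizeof(buf), 0);
    if (n > 0)
        zmq_send(sock, buf, n, 0);                /* echo the request back */

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}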

4
  • 2
    You might like, I guess, Amit, Martin SUSTRIK's next artwork -- POSIX-compliant nanomsg. Anyway, welcome & enjoy this great place & become its actively contributing member. Commented Apr 2, 2017 at 6:06
  • 1
    I do like "nanomsg" - really great lib. Thank you.
    – Amit Vujic
    Commented Feb 20, 2023 at 23:04
  • It's easy to use, but you cannot assert that it's "dramatically faster than sockets"; everyone needs to benchmark for themselves on their own CPU/setup to know this. Especially since, looking at the two example benchmarks posted in other answers, both Unix sockets and TCP sockets perform better than ZMQ. Commented Jun 1, 2023 at 11:43
  • It looks to me like those benchmarks were posted two years after my answer. I will update myself; thank you for your support.
    – Amit Vujic
    Commented Jun 10, 2023 at 11:37
