
For communication between local processes, pipes are generally faster than sockets. Here is a benchmark (the archived copy is linked at the bottom).

SYSV IPC vs. UNIX pipes vs. UNIX sockets

Latency test

Samples: 1 million

Method                Average latency (us)
SYSV IPC msgsnd/rcv   7.0
UNIX pipe             5.9
UNIX sockets          11.4

Bandwidth test 

Samples: 1 million 

Data size: 1 kB 

Block size: 1 kB

Method                Average bandwidth (MB/s)
SYSV IPC msgsnd/rcv   108
UNIX pipe             142
UNIX sockets          95

Notes 

msgsnd/rcv have a maximum block size: on my system it’s about 4kB. Performance increases as block size is raised towards the ceiling. The highest bandwidth I could achieve was 284 MB/s, with a block size of 4000 bytes and a data size of 2MB. Performance dropped off slightly as the data size was decreased, with 4kB of data giving a bandwidth of 266 MB/s.

I don’t know what block size my system uses internally when transferring data through a pipe, but it seems a lot higher than 4kB. Using a block size of 32kB, I could achieve over 500 MB/s. I tested this with various data sizes from 32kB to 32MB and each time achieved 400-550 MB/s. Performance tailed off as the data and block sizes were decreased, and likewise as the block size was raised beyond that point.

UNIX socket performance is much better with a block size larger than 1kB. I got the best results (134 MB/s) with 2kB blocks and a 4kB data size. This is comparable with UNIX pipes.

I’m not sure if my testing methods are perfect. Bandwidth testing seems fairly straightforward, but I kind of guessed at how to test latency. I just sent 1 character back and forth between two processes living at either end of a fork().
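That ping-pong approach is easy to reconstruct. Here is a rough sketch of it (my own reconstruction, not the original benchmark code), bouncing one byte between the two ends of a fork() over a pair of pipes, with a smaller sample count:

```python
# Sketch of the ping-pong latency test described above: one byte is
# bounced between parent and child over two pipes, and the average
# round trip gives roughly twice the one-way latency. POSIX-only.
import os
import time

N = 10_000  # the original used 1 million samples

p2c_r, p2c_w = os.pipe()  # parent -> child
c2p_r, c2p_w = os.pipe()  # child -> parent

pid = os.fork()
if pid == 0:  # child: echo every byte straight back
    os.close(p2c_w)
    os.close(c2p_r)
    for _ in range(N):
        os.write(c2p_w, os.read(p2c_r, 1))
    os._exit(0)

os.close(p2c_r)
os.close(c2p_w)

start = time.perf_counter()
for _ in range(N):
    os.write(p2c_w, b"x")
    os.read(c2p_r, 1)
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)

one_way_us = elapsed / N / 2 * 1e6
print(f"average one-way latency: {one_way_us:.1f} us")
```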

One factor I didn’t test is the time taken to bind() a UNIX socket and connect() to the server. If you keep connections open, it’s obviously not significant.
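If you do want to measure that one-off setup cost, a minimal sketch looks like this (not part of the original benchmark; the socket path is arbitrary):

```python
# Sketch: time the bind()/listen()/connect()/accept() sequence for
# a UNIX-domain stream socket, which the benchmark above excluded.
import os
import socket
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
start = time.perf_counter()
server.bind(path)
server.listen(1)

# connect() on AF_UNIX succeeds once the connection is queued,
# so we can connect before calling accept() in the same process
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = server.accept()
setup_s = time.perf_counter() - start

print(f"bind+connect+accept took {setup_s * 1e6:.0f} us")

conn.close()
client.close()
server.close()
os.unlink(path)
```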

Conclusion 

On my system, UNIX pipes give higher bandwidth and lower latency than SYSV IPC msgsnd/rcv and UNIX sockets, but the advantage depends on the block size used. If you are sending small amounts of data, you probably don’t need to worry about speed; just pick the implementation that suits you. If you want to shift a huge amount of data, use pipes with a 32kB block size.
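Putting that conclusion into practice just means reading and writing the pipe in 32kB chunks. A minimal bandwidth-test sketch along those lines (my own code, with a smaller data size than the original runs):

```python
# Sketch of a pipe bandwidth test: pump DATA_SIZE bytes through a
# pipe in BLOCK_SIZE chunks and report MB/s. POSIX-only.
import os
import time

BLOCK_SIZE = 32 * 1024       # the 32 kB block size recommended above
DATA_SIZE = 8 * 1024 * 1024  # 8 MB; the original runs went up to 32 MB
block = b"\0" * BLOCK_SIZE

r, w = os.pipe()
pid = os.fork()
if pid == 0:  # child: drain the pipe until all bytes have arrived
    os.close(w)
    remaining = DATA_SIZE
    while remaining > 0:
        remaining -= len(os.read(r, BLOCK_SIZE))
    os._exit(0)

os.close(r)
start = time.perf_counter()
sent = 0
while sent < DATA_SIZE:
    # the slice handles both partial writes and the final short chunk
    sent += os.write(w, block[: DATA_SIZE - sent])
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)

print(f"pipe bandwidth: {DATA_SIZE / elapsed / 1e6:.0f} MB/s")
```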

System information

CPU    : Intel Celeron III (Coppermine)
RAM    : 256MB
Kernel : Linux 2.2.18

I think that even though sockets are flexible, that flexibility can also lead to bad code design. Using pipes forces you to design your project's architecture up front: which process is the parent, which are the children, how they cooperate (this determines how the pipes are established), and what functionality each process is assigned. A project designed this way has a hierarchical structure and is easier to maintain.

https://web.archive.org/web/20160401124744/https://sites.google.com/site/rikkus/sysv-ipc-vs-unix-pipes-vs-unix-sockets
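The hierarchy that pipes enforce is visible in the classic fork-and-pipe pattern. A minimal sketch (the task names are purely illustrative):

```python
# Minimal parent/child hierarchy over a pipe: the parent produces
# work, the child consumes it. The pipe's direction is fixed when
# the processes are set up, which is the structural constraint
# described above. POSIX-only.
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:          # child: worker, reads jobs from the pipe
    os.close(w)
    with os.fdopen(r) as jobs:
        for line in jobs:
            print(f"child handled: {line.strip()}")
    os._exit(0)
else:                 # parent: coordinator, writes jobs
    os.close(r)
    with os.fdopen(w, "w") as jobs:
        for task in ("parse", "compile", "link"):
            jobs.write(task + "\n")
    # closing the write end (via the with-block) signals EOF to the child
    _, status = os.waitpid(pid, 0)
```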
