
I have a machine with an i7-2600K and I'm turning it into a storage server. It uses the P67 chipset, which connects to the CPU over a 20 Gb/s DMI interface. There are also PCIe x8/x8 slots on the board wired directly to the CPU. I thought about connecting 8 SATA drives to the onboard controllers (over DMI) and 8 SATA drives via a Dell PERC 6/i PCIe x8 controller, then exporting them over a quad 10 Gb/s network interface using rr-bonding.
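As a rough sanity check of that layout (the per-drive figure below is an assumption for a 7200 RPM SATA drive, not a measurement; substitute real numbers for your drives):

    # Back-of-the-envelope check of the planned layout (Python).
    # PER_DRIVE_GBPS is an assumed sequential rate, not a measured one.
    PER_DRIVE_GBPS = 1.2
    DMI_GBPS = 20.0          # DMI link to the P67 chipset (raw rate)
    NIC_GBPS = 4 * 10.0      # quad 10 Gb/s NICs, rr-bonded

    onboard = 8 * PER_DRIVE_GBPS   # 8 drives behind the chipset SATA controllers
    perc = 8 * PER_DRIVE_GBPS      # 8 drives behind the PERC 6/i (PCIe x8)

    print(f"onboard drives: {onboard:.1f} Gb/s (shares the {DMI_GBPS:.0f} Gb/s DMI link)")
    print(f"PERC drives:    {perc:.1f} Gb/s")
    print(f"total disks:    {onboard + perc:.1f} Gb/s vs {NIC_GBPS:.0f} Gb/s of network")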

However, I'm not sure if this CPU is even capable of pumping 40 Gb/s of I/O in theory, even if we ignore all bottlenecks. Is it possible to estimate the maximum theoretical I/O throughput of a CPU?

  • This CPU has 16 PCIe 2.0 lanes. No network card I found uses more than 8 lanes. Simple math shows it cannot support 40 Gb/s. Why bother looking at anything but bottlenecks?
    – Daniel B
    Commented May 16, 2018 at 18:40
  • @DanielB fair point, PCIe 2.0 x8 will support at best 32 Gb/s. The question then still stands as to whether there is even a theoretical chance of getting that. I'm more concerned about whether the system can get anywhere close to the theoretical PCIe bandwidth, or whether that's just a number on paper with no relation to reality and real performance is more like 16 Gb/s at best (see the link arithmetic sketched after these comments).
    – Lapsio
    Commented May 16, 2018 at 20:19
  • Why not just get the intended card (I'm assuming a 10 Gb/s PCIe card) and throughput-test it using jperf or similar? My guess is this CPU would be able to handle multiple 10 Gb/s cards simultaneously. But without all the additional hardware info, no one will be able to give you anything other than a guess. Commented May 16, 2018 at 21:22
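For reference, here is the PCIe 2.0 link arithmetic behind the figures in the comments above; a sketch using the standard 5 GT/s per-lane rate and 8b/10b encoding, ignoring packet and protocol overhead:

    # PCIe 2.0 usable bandwidth: 5 GT/s per lane, 8b/10b encoding
    # (10 bits on the wire per 8 bits of data); protocol overhead ignored.
    GT_PER_LANE = 5.0
    ENCODING = 8 / 10

    def pcie2_gbps(lanes):
        """Usable data rate in Gb/s of a PCIe 2.0 link."""
        return lanes * GT_PER_LANE * ENCODING

    print(f"x4 (DMI 2.0): {pcie2_gbps(4):.0f} Gb/s")   # the '20 Gb/s' DMI carries ~16 Gb/s of data
    print(f"x8:           {pcie2_gbps(8):.0f} Gb/s")   # the 32 Gb/s figure above
    print(f"x16:          {pcie2_gbps(16):.0f} Gb/s")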

1 Answer


It uses the P67 chipset, which connects to the CPU over a 20 Gb/s DMI interface. There are also PCIe x8/x8 slots on the board wired directly to the CPU.

Incorrect: peripheral buses (e.g. PCIe and SATA) are not directly connected to the CPU.
Instead of a direct connection to the CPU, PCIe, SATA, USB, and memory have controllers (i.e. auxiliary logic) that interface to some kind of high-speed system bus. The CPU's address and data buses are typically connected directly to such a system bus.

Note that modern CPU chips are highly integrated (e.g. a System-on-Chip, SoC, is possible), and the functionality of system chips (e.g. north/south-bridge chips) can be moved closer to the CPU, where tighter integration improves performance. Such CPU chips may have PCIe and SATA connections because they incorporate those controllers. But that does not mean that such peripherals are "wired directly" to any processor(s).

Is it possible to estimate the maximum theoretical I/O throughput of a CPU?

Yes, but throughput using programmed I/O is not a meaningful number.
Since modern computer systems typically perform I/O using 2nd- or 3rd-party DMA (rather than programmed I/O), the CPU is only involved at the start and end of a typical I/O operation.
IOW, the CPU would not be the I/O bottleneck.
See https://stackoverflow.com/questions/25318145/dma-vs-interrupt-driven-i-o/38165400#38165400
and https://stackoverflow.com/questions/38119491/master-for-interrupt-based-uart-io/38155310#38155310
Therefore your question should be reworded to read:
"Is it possible to calculate the theoretical I/O throughput of a computer?"

One upper bound for I/O operations is memory bandwidth. Since I/O is always between the peripherals and main memory (ignoring the rare use of peripheral-to-peripheral transfers by a bus master), memory speed can be a bottleneck.
Since main memory is typically much faster than any single peripheral, the issue is more likely to be contention for memory access by DMA controllers, bus masters, and the CPU, which has to be arbitrated by the memory controller.
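To put a number on that bound: the i7-2600K's integrated memory controller is dual-channel DDR3-1333, so its theoretical peak can be sketched as below (ignoring refresh, arbitration, and CPU contention; note that disk-to-NIC traffic crosses memory twice, once per DMA direction):

    # Theoretical peak memory bandwidth of an i7-2600K (dual-channel DDR3-1333).
    TRANSFERS_PER_S = 1333e6   # transfers per second, per channel
    BYTES_PER_TRANSFER = 8     # 64-bit channel
    CHANNELS = 2

    peak_gbps = TRANSFERS_PER_S * BYTES_PER_TRANSFER * CHANNELS * 8 / 1e9  # ~170 Gb/s

    # Each disk-to-network byte is written to memory by the disk controller's
    # DMA and read back out by the NIC's DMA, so roughly halve the bound.
    print(f"peak memory bandwidth:   {peak_gbps:.0f} Gb/s")
    print(f"disk-to-NIC upper bound: {peak_gbps / 2:.0f} Gb/s (vs. the 40 Gb/s target)")

So even with both DMA passes counted, memory bandwidth sits comfortably above the 40 Gb/s target; the tighter limits are the links discussed above.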

  • But that's not correct. Some PCIe slots are connected directly to the CPU.
    – Daniel B
    Commented May 16, 2018 at 21:35
  • I'll admit ignorance of the details of the latest Intel processors. But a direct PCIe connection to the CPU makes no sense. That's basic, conventional computer architecture. How does bus mastering work? Please provide some proof. BTW, since some CPU chips also integrate the memory controller, a connection to the "CPU chip" doesn't necessarily mean a direct connection to the CPU proper.
    – sawdust
    Commented May 16, 2018 at 21:51
  • @Lapsio -- I can see how that block diagram would lead to incorrect assumptions. But you need to distinguish between the actual CPU/processor and the "CPU chip". Wikipedia confirms my suspicions: "The memory, PCIe, SATA, and USB controllers are incorporated into the same chip as the processor cores." IOW what used to be in the "northbridge" chip is now integrated with the processor(s) in a "CPU chip". There is no direct connection to the CPU/processor; there are still peripheral/bus controllers involved. Thanks for the accept.
    – sawdust
    Commented May 17, 2018 at 1:07
  • Typically a system needs only one DMA controller, which is an actual device, and has a driver in the OS. In the original IBM PC, the DMAC was a distinct Intel chip. Use of this system DMAC is called 3rd-party DMA. DMA using a bus master (e.g. a PCIe device that's capable) is called 2nd-party DMA. (Before you ask: memory would be the first party.)
    – sawdust
    Commented May 17, 2018 at 1:45
  • The built-in PCIe root port is part of the System Agent, which is undoubtedly part of the CPU, just like the L2 cache is. See this article for details. Of course there is some interconnect involved, but then again: there always is.
    – Daniel B
    Commented May 17, 2018 at 14:48
