
I'm working in deep learning and I'm trying to identify a bottleneck in our GPU pipeline.

We're running Ubuntu on a dual-socket Intel Xeon machine with four NVIDIA Titan RTX cards. GPU utilization as reported by nvidia-smi is quite low even though GPU memory usage is around 97%.

So I'm trying to determine whether the PCIe bus is the bottleneck.

I've downloaded PCM (Processor Counter Monitor) and I'm running its PCIe bandwidth monitoring utility to watch traffic on the PCIe 3.0 x16 links.

Processor Counter Monitor: PCIe Bandwidth Monitoring Utility 
 This utility measures PCIe bandwidth in real-time

 PCIe event definitions (each event counts as a transfer): 
   PCIe read events (PCI devices reading from memory - application writes to disk/network/PCIe device):
     PCIeRdCur* - PCIe read current transfer (full cache line)
         On Haswell Server PCIeRdCur counts both full/partial cache lines
     RFO*      - Demand Data RFO
     CRd*      - Demand Code Read
     DRd       - Demand Data Read
   PCIe write events (PCI devices writing to memory - application reads from disk/network/PCIe device):
     ItoM      - PCIe write full cache line
     RFO       - PCIe partial Write
   CPU MMIO events (CPU reading/writing to PCIe devices):
     PRd       - MMIO Read [Haswell Server only] (Partial Cache Line)
     WiL       - MMIO Write (Full/Partial)
...
Socket 0: 2 memory controllers detected with total number of 6 channels. 3 QPI ports detected. 2 M2M (mesh to memory) blocks detected.
Socket 1: 2 memory controllers detected with total number of 6 channels. 3 QPI ports detected. 2 M2M (mesh to memory) blocks detected.
Trying to use Linux perf events...
Successfully programmed on-core PMU using Linux perf
Link 3 is disabled
Link 3 is disabled
Socket 0
Max QPI link 0 speed: 23.3 GBytes/second (10.4 GT/second)
Max QPI link 1 speed: 23.3 GBytes/second (10.4 GT/second)
Socket 1
Max QPI link 0 speed: 23.3 GBytes/second (10.4 GT/second)
Max QPI link 1 speed: 23.3 GBytes/second (10.4 GT/second)

Detected Intel(R) Xeon(R) Gold 5122 CPU @ 3.60GHz "Intel(r) microarchitecture codename Skylake-SP" stepping 4 microcode level 0x200004d
Update every 1.0 seconds
delay_ms: 54
Skt | PCIeRdCur |  RFO  |  CRd  |  DRd  |  ItoM  |  PRd  |  WiL
 0      13 K        19 K     0       0      220 K    84     588  
 1       0        3024       0       0        0       0     264  
-----------------------------------------------------------------------
 *      13 K        22 K     0       0      220 K    84     852  

Ignore the actual values for a moment; in real runs I see much larger numbers. :-)

How do I calculate the runtime PCIe bandwidth per socket using Processor Counter Monitor (PCM)?

Why are there only two sockets listed?

  • Why don't you just read the Wikipedia article about PCI Express?
    – zx485
    Commented Mar 27, 2019 at 22:44
  • @zx485 the link does not show me how to calculate runtime bandwidth using PCM. I'll edit my question.
    – empty
    Commented Mar 27, 2019 at 22:48
  • You may only be seeing 2 sockets as your MoBo northbridge and/or CPU has only two PCIe controllers.
    – Brian
    Commented Mar 27, 2019 at 23:05

1 Answer

From the opcm project on GitHub:

"Socket" refers to CPU sockets, not PCIe slots or devices. You have a system with 2 CPU sockets, right? The --help output describes the -B switch, which outputs bandwidth per CPU socket (see also the caveat about partial operations). pcm-iio.x is a different utility that shows bandwidth per PCIe device.
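
For a quick back-of-the-envelope check you can also convert the raw counters yourself. The sketch below is an assumption-laden illustration, not PCM's own implementation: it treats every counted event as one full 64-byte cache line, which is what the event descriptions in the pasted output suggest, and it groups RFO with the device-write direction even though the header lists RFO under both directions. Partial operations (partial-write RFO, PRd, WiL) can move less than a full line, which is the caveat the -B switch is meant to handle for you.

    # Hedged sketch: estimate per-socket PCIe bandwidth from pcm-pcie counters.
    # Assumption: each event moves one 64-byte cache line, so
    # bandwidth ~= event_count * 64 / sampling_interval. Partial operations
    # make this an approximation; pcm-pcie.x with -B does the bookkeeping itself.

    CACHE_LINE_BYTES = 64

    def counters_to_bandwidth(counts, interval_sec=1.0):
        """Turn one sampling interval's event counts into approximate bytes/s.

        counts       -- dict of counter name -> events seen in the interval
        interval_sec -- the "Update every N seconds" interval of pcm-pcie
        """
        # Device-reads-from-memory events (data flowing from RAM to the GPU).
        read_events = sum(counts.get(k, 0) for k in ("PCIeRdCur", "CRd", "DRd"))
        # Device-writes-to-memory events (data flowing from the GPU to RAM).
        # Grouping RFO here is an assumption; the header lists it both ways.
        write_events = sum(counts.get(k, 0) for k in ("ItoM", "RFO"))
        to_bytes_per_sec = lambda events: events * CACHE_LINE_BYTES / interval_sec
        return to_bytes_per_sec(read_events), to_bytes_per_sec(write_events)

    # Socket-0 numbers from the sample output, 1-second interval:
    rd_bw, wr_bw = counters_to_bandwidth(
        {"PCIeRdCur": 13_000, "RFO": 19_000, "ItoM": 220_000}, interval_sec=1.0
    )
    print(f"socket 0: ~{rd_bw / 1e6:.1f} MB/s reads, ~{wr_bw / 1e6:.1f} MB/s writes")

With the sample counts this comes out to well under 20 MB/s combined, a tiny fraction of the roughly 15.75 GB/s a PCIe 3.0 x16 link can carry; whether the bus is really the bottleneck depends on the much larger counts seen under real load. For a per-GPU view, pcm-iio.x (mentioned above) breaks the bandwidth down by PCIe device.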
