
Timeline for Intel 10Gbps card bandwidth

Current License: CC BY-SA 3.0

11 events
when | what | by | license | comment
Sep 8, 2014 at 22:25 comment added Astara @DarthAndroid - re: $$ I know the feeling... I compromised. I ratcheted down my minimal setup to leave out a switch and only connect my home server & my desktop. Switches that supported teaming/bonding would have more than doubled my costs. But I could explore them using the dual cards (Intel x540t2). Unfortunately, I really can't get much, if any, benefit from using 2 cards (insufficient CPU). I tried interrupt and process affinities, among other things. But if you think of the minimum setup as 2 cards, it's more affordable.
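(A minimal sketch of the interrupt and process affinity tweaks mentioned in the comment above, assuming a Linux host; the IRQ number 54 and the core list "2-3" are illustrative placeholders, not values taken from this setup.)

```python
import os

def pin_process(pid, cpus):
    """Restrict a process to the given set of CPU cores (Linux only)."""
    os.sched_setaffinity(pid, cpus)

def pin_irq(irq, cpu_list):
    """Steer a hardware IRQ to specific cores by writing its /proc affinity list.
    Requires root; cpu_list uses kernel syntax such as "2-3" or "0,4"."""
    with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
        f.write(cpu_list)

if __name__ == "__main__":
    # Illustrative values only: pin this process to cores 2-3 and steer a
    # hypothetical NIC queue IRQ (54) to the same cores so that interrupt
    # handling and the receiving process share a cache domain.
    pin_process(os.getpid(), {2, 3})
    pin_irq(54, "2-3")
```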
Sep 8, 2014 at 19:06 comment added Darth Android @Astara The numbers are from the linked article, in which the author managed to get 9.90Gbps over a 10Gbps link. I'm afraid most of my personal 10Gbps knowledge is theoretical, since the price of 10Gbps hardware makes it difficult to justify for home use (as much as I'd like to get some for my fileserver).
Sep 8, 2014 at 18:51 comment added Astara You said "every hop along the way (switch/router)". Starting at 10Gb speeds, the standard doesn't support contention/collisions on the line -- so any interconnects must be switches (found that out today). Also, I have seen the article for PCI-X, but that bus is way old and you'd be hard pressed to run full speed with 1Gb. As for your figures above, are you using those with 10Gb? They look more like 1Gb figures. Maybe it depends on memory, but on x64 I try to use larger buffers.
Sep 8, 2014 at 18:31 comment added Darth Android @Astara The PCI tuning is in the linked article, but is specific to PCI-X. If there is something similar for PCIe, I do not know it.
Sep 8, 2014 at 16:15 comment added Vigneshwaren Oh no, to be very clear: both systems (I had two identical boards of the same CPU variety) each had one of the 10Gig cards (I had two Intel 10Gigs as well). The boards were then connected to each other via an optic fibre cable with absolutely no other device in between (switches, routers, modems, the Internet, etc.). And I would like to know how you tuned the PCI block size as well. Kernel config, maybe?
Sep 8, 2014 at 16:14 comment added Astara BTW, how do you do the 4k PCI block size tune? I've never had luck with that one (I use 9k on the wire, but never had luck changing the PCI block size)...
Sep 8, 2014 at 16:11 comment added Astara Nothing you mention above applies specifically to 10Gb Ethernet. You didn't mention the most important reason for not getting full throughput, and that's the 8b/10b encoding (en.wikipedia.org/wiki/8b/10b_encoding) that knocks 20% of your BW off the top.
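(For reference, the arithmetic behind that claim, assuming the link in question actually uses 8b/10b encoding, where every 8 data bits are carried as 10 line bits:)

$$
\text{efficiency} = \frac{8}{10} = 0.8, \qquad 10\ \text{Gbps (line rate)} \times 0.8 = 8\ \text{Gbps usable}
$$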
Sep 8, 2014 at 16:09 comment added Darth Android @Vigneshwaren I assume you mean that you performed these optimizations on both of your systems? Or did you have two cards in the same system?
Sep 8, 2014 at 16:05 comment added Vigneshwaren Well, I previously ran across this document, kernel.org/doc/ols/2009/ols2009-pages-169-184.pdf, which listed all these optimizations, and I had performed them on my system. Initially I maxed out at 2.5Gbps, and I pushed it to 3.3Gbps by playing around with exactly those values. I guess I would have to look at those numbers again.
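(A rough sketch of the kind of socket-buffer tuning documents like the linked OLS paper discuss, written as direct /proc/sys writes; the specific 16 MB limits are illustrative assumptions, not values taken from that paper or from this thread.)

```python
# Illustrative TCP buffer tuning for a high bandwidth-delay-product 10GbE path.
# Writing to /proc/sys requires root; all values below are assumptions.
SETTINGS = {
    "/proc/sys/net/core/rmem_max": "16777216",
    "/proc/sys/net/core/wmem_max": "16777216",
    "/proc/sys/net/ipv4/tcp_rmem": "4096 87380 16777216",  # min default max
    "/proc/sys/net/ipv4/tcp_wmem": "4096 65536 16777216",  # min default max
}

for path, value in SETTINGS.items():
    with open(path, "w") as f:
        f.write(value)
```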
Sep 8, 2014 at 15:51 history edited Darth Android CC BY-SA 3.0
added 154 characters in body
Sep 8, 2014 at 15:45 history answered Darth Android CC BY-SA 3.0