
I have two hosts. One is the iSCSI initiator. The other is the iSCSI target. They are both running Ubuntu 16.04 LTS and both are using open-iscsi.

Specifically, the target host (let's call it "target") runs the iscsid service. It has 16 SSDs that it exports as iSCSI targets. Its configuration is in the /etc/iscsi/iscsi.conf file.

Its targets are defined in the /etc/tgt/conf.d/config file, which holds the target's IQN, the backing devices, and the list of initiators allowed to mount the targets.
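As a sketch, one entry in that file typically has this shape (the IQN, device path, and initiator address below are placeholders, not values from this setup):

```
<target iqn.2016-07.example.com:ssd0>
    # the block device being exported
    backing-store /dev/sdb
    # only this initiator may log in
    initiator-address 192.168.10.20
</target>
```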

We didn't use the iscsitarget package because it doesn't work with iSER.

Both the target and the initiator use port 3260. They communicate through a dedicated InfiniBand card on a separate, InfiniBand-only switch; the iSCSI traffic does not cross an Ethernet connection or switch.

Is there a way to have each of the iSCSI targets use a dedicated port?

The reason I want to do this is that I am testing the performance of the InfiniBand card. If I run performance tests concurrently on all the iSCSI targets, each one only gets about 300MB/s of throughput. 300MB * 16 is nowhere near the 40GB maximum throughput the InfiniBand card offers.

I suspect the bottleneck is the TCP/IP port: even when I run performance tests on all 16 iSCSI targets in parallel, all of them go through port 3260.

Is there a way to set up the iSCSI target so that each target uses a separate port? Target 0 could be on port 3261, Target 1 on port 3262, and so on, and from the initiator I could specify the IP and port in the "iscsiadm -m discovery" and "iscsiadm -m node" commands.
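For what it's worth, one sketch of this with tgt (all IPs, IQNs, and device names here are placeholders, not taken from the question) is to run one tgtd instance per port, since tgtd accepts a portal=ip:port option, and then point discovery at each port from the initiator:

```
# On the target host: a second tgtd instance bound to port 3261.
# Each extra instance also needs its own management control port.
tgtd --iscsi portal=192.168.10.1:3261 --control-port 1

# On the initiator: discover and log in against that specific port.
iscsiadm -m discovery -t sendtargets -p 192.168.10.1:3261
iscsiadm -m node -T iqn.2016-07.example.com:ssd0 -p 192.168.10.1:3261 --login
```

Whether splitting ports this way actually helps depends on where the bottleneck really is, per the answer below the commands are only a starting point for the experiment.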

1 Answer


InfiniBand would be 40 gigabits per second. That's 5000 megabytes per second. Your drives' combined output (300 MB/s * 16) is 4800 megabytes per second (38.4 Gb/s).
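Spelling out that unit conversion (8 bits per byte, decimal prefixes):

```shell
# 40 Gb/s link capacity expressed in MB/s: 40 * 1000 Mb, divided by 8 bits per byte
echo $((40 * 1000 / 8))        # 5000 MB/s

# combined drive throughput: 16 drives at 300 MB/s each
echo $((300 * 16))             # 4800 MB/s

# the same combined figure converted back to Gb/s
awk 'BEGIN { print 300 * 16 * 8 / 1000 }'   # 38.4
```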

Sounds like you are maxing out your InfiniBand. I don't believe the shared port is your issue, unless I've misinterpreted your configuration?

  • Ryan, thanks for your response. I should have clarified that the 300 figure is in megabits. 300Mb * 16 = 4800Mb. 1Gb = 1024Mb. 40Gb = 40960Mb. I'm seeing about 1/10th of the performance I was expecting. That's why I'm trying to dedicate one iSCSI target to one port, so that when I issue an iscsiadm command I can specify the port after the IQN name.
    – SQA777
    Commented Jul 13, 2016 at 19:59
  • Have you tried other network performance testing to make sure you can reach 40Gbit (iperf or something)? You're sure that's 300Mbit per drive (you're maxing out at 4.8Gbit total?)? How are you testing? You're getting 37.5 Megabytes/sec per drive? What's a single drive give you when you aren't testing concurrently? Commented Jul 14, 2016 at 1:08
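The raw-network check suggested in the last comment could look like this with iperf3 (the IP is a placeholder; run the server side on the target host first):

```
# on the target host: start an iperf3 server
iperf3 -s

# on the initiator: 8 parallel TCP streams for 30 seconds against the
# InfiniBand-facing address, to see how close the bare link gets to 40 Gb/s
iperf3 -c 192.168.10.1 -P 8 -t 30
```

If iperf3 also tops out well below the link rate, the bottleneck is in the network stack or fabric rather than in iSCSI's use of a single port.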

