[Stgt-devel] iSER tgtd performance
Pete Wyckoff
Thu Oct 11 20:38:29 CEST 2007
nejc.porenta at gmail.com wrote on Wed, 10 Oct 2007 10:35 +0200:
> Yes, I have DDR IB cards (and I figured that might be the issue).
>
> I ran some additional tests with disktest
> (http://www.open-iscsi.org/bits/disktest.tar.gz). I added 4GB of RAM
> to the target machine, so it now has 8GB, created several ramdisks,
> and tested bandwidth with several clients on the same client
> machine. Here are the results:
> - 2GB target: 2 clients - each 280MB/s, 3 clients - each 215MB/s
> - 1GB target: 1 client - 900MB/s, 2 clients - each 350MB/s, 3 clients - each 238MB/s
>
> Each client had a different LUN target.
Wow. That degradation is not good. These clients are in
different VMs running on the same host? I have never tried such
a configuration. Could it be issues with accessing the NIC?
With two programs, both submitting SCSI commands to the same
kernel, I have not seen any degradation. But I only tried a single
LUN.
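In case it helps to compare setups, here is roughly how I would
export per-client ramdisk LUNs with tgtadm (invocations from memory;
the target name is a placeholder, and some tgt versions want
--lld iser for the RDMA side):

    # one target, one ramdisk per LUN; bind all initiators (test only)
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2007-10.test:ramdisks
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 \
        --lun 1 -b /dev/ram0
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 \
        --lun 2 -b /dev/ram1
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL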
Be sure to watch "vmstat 1" or the blinkie lights on the disk to
make sure you are not falling out of RAM somehow.
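That is, something like:

    # once a second; nonzero "si"/"so" (swap in/out), or free+cache
    # shrinking toward zero, means the working set fell out of RAM
    vmstat 1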
Remember you can remove the effect of the RAM disk and OS on the
target by disabling read/write as I mentioned earlier.
> I am planning to do some testing with IOzone or bonnie++, so results
> might follow...
>
> I use disktest like this:
> ./disktest -PT -T120 -pr -h1 -K1 -B256K -IB /dev/sdb
>
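For reference, my reading of those flags, from the LTP disktest usage
text (worth double-checking against the version you built):

    -PT      report throughput performance data
    -T120    run for 120 seconds
    -pr      random seek pattern
    -h1      performance heartbeat every second
    -K1      one worker process
    -B256K   256 KiB transfer blocks
    -IB      treat /dev/sdb as a block device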
> I also tried testing the target from 2 different client machines:
> - 3GB target: 1 client - 440MB/s, 2 clients - each 420MB/s
> - 2GB target: 1 client - 470MB/s, 2 clients - each 470MB/s
> - 1GB target: 1 client - 900MB/s, 2 clients - each 850MB/s
Yeah, this is more what one would expect. Will be interested to see
whether you manage to narrow down the issue with large RAM disks
being significantly slower like that. I can't think of anything in
the target itself that would cause it; maybe some feature of the
Linux VM (virtual memory) subsystem, or chipset IO TLBs (if you have
those), or NIC memory management and connection switching.
> I had some trouble with HeaderDigest at the beginning, so I turned
> it to None and that stayed in my scripts. I tried it again now, and
> it does not work unless HeaderDigest is disabled.
Thanks for this feedback. Might be nice for someone to patch up
open-iscsi to disable digests if rdma transport_name is selected.
No reason to force the user to do this step.
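Until then, the manual workaround is to force them off on the
initiator, e.g. in iscsid.conf (setting names as in the stock
open-iscsi config):

    # digests must be off for iSER; the IB hardware already
    # checksums on the wire
    node.conn[0].iscsi.HeaderDigest = None
    node.conn[0].iscsi.DataDigest = None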
> I have also tried tgtd with iSER with a TCP connection over IPoIB,
> but disktest reported read errors. At the beginning everything went
> fine, but then read requests started to fail and performance
> dropped significantly. I tried the iSCSI Enterprise Target software
> over TCP and it worked well, so I believe there might be a bug in
> tgtd. I am unable to reproduce it, though, because if I turn on the
> debug option, performance drops because of 100% CPU usage on the
> target. And believe it or not, everything works while the target
> CPU is at 100%, so there are some issues with a high-speed tgtd
> IPoIB target :)
tgtd on TCP on IPoIB does not use iSER. Just normal TCP. Unless
you forgot to change the initiator's node.transport_name back to tcp.
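That is, something like (the target IQN is a placeholder):

    iscsiadm -m node -T <target-iqn> -o update \
        -n node.transport_name -v tcp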
Would definitely like to hear more about these non-RDMA problems in
tgtd. Everything works fine here over 1 Gb/s Ethernet TCP and IPoIB
TCP, though the latter gets only around 2 Gb/s (with logging off).
If you can narrow things down a bit, it would be helpful.
-- Pete