[Stgt-devel] Strange throughput results with tgtd and iSCSI

Pete Wyckoff pw at osc.edu
Thu Feb 7 19:20:16 CET 2008


pw at osc.edu wrote on Thu, 07 Feb 2008 11:57 -0500:
> bart.vanassche at gmail.com wrote on Thu, 07 Feb 2008 17:04 +0100:
> > Results on Ethernet:
> > For a block size of 512 KB: write throughput of 43 MB/s.
> > For a block size of 1 MB: write throughput of 15 MB/s.
> 
> Redoing the tests at 1 GB keeps it all in RAM for me.  Then I get
> 95 MB/s at 512 kB and 88 MB/s at 1 MB, which confirms your
> performance disparity, though my numbers are not nearly so bad.

I looked at the logs on tgtd.  These are writes.  With my kernel,
the 1 MB case just sends two 512 kB requests back-to-back due to
initiator block limits, so both cases put an identical series of
requests on the wire.

The time between reception of a new command request and the sending
of the status response is constant for both the 512 kB and 1 MB
cases.  The time the target sits idle varies, though.  Initially
the idle time between commands is identical for 512 kB and 1 MB,
until about request number 2000 out of 2048; then a 40 ms delay
shows up between commands.  What is the initiator doing here?
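
The gap numbers just come from differencing consecutive command
timestamps in the log.  A throwaway sketch of that analysis,
assuming one timestamp in seconds per line (not the actual tgtd
log format):

    #include <stdio.h>

    /* Gap finder: reads one command timestamp (seconds, as a
     * double) per line on stdin and flags idle periods over 30 ms. */
    int main(void)
    {
        double t, prev = -1.0;

        while (scanf("%lf", &t) == 1) {
            if (prev >= 0.0 && t - prev > 0.030)
                printf("gap of %.1f ms before t=%.6f\n",
                       (t - prev) * 1e3, t);
            prev = t;
        }
        return 0;
    }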

It turns out to be a TCP artifact.  I've been carrying around a
patch to tgtd that sets TCP_NODELAY on the socket.  The numbers I
reported above are for stock tgtd.  With the TCP_NODELAY patch,
these change to 95 MB/s at 512 kB (same) and 102 MB/s at 1 MB
(better).  Presumably Bart runs for longer (2 GB total) and sees
more of these 40 ms delays.  I did not analyze why the Nagle
algorithm decides to hold packets when it does; I was just happy
to turn it off.
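
Disabling Nagle comes down to a single setsockopt() call on the
connected socket.  A minimal sketch, with a helper name and error
handling of my own rather than the actual patch:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Turn off Nagle so small segments are sent immediately
     * instead of being held until the previous segment is ACKed. */
    int disable_nagle(int fd)
    {
        int one = 1;

        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one,
                       sizeof(one)) < 0) {
            fprintf(stderr, "TCP_NODELAY: %s\n", strerror(errno));
            return -1;
        }
        return 0;
    }

With Nagle off, small segments go out right away rather than being
coalesced while waiting for an ACK, which suits an interleaved
request/response protocol like iSCSI.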

I'll submit the patch to Tomo now.

		-- Pete


