Tomasz Chmielewski wrote:
> I made some simple performance tests of tgt and iSCSI-SCST.
>
> Reading the array on the target machine to /dev/null gives me ~70 MB/s.
>
> When I read the same array on the initiator with tgt as a target, it
> gives me ~25 MB/s.
>
> With iSCSI-SCST, I get ~35 MB/s (although I think I maxed out the PCI
> at this point; on something more modern it could be better).

Tomasz means here that on his target both the network and the
backstorage hardware sit on the same 32-bit/33 MHz PCI bus and share
its bandwidth, which in practice is less than 100 MB/s. Since every
byte read over iSCSI crosses that bus twice (storage to memory, then
memory to the NIC), the ceiling is about 100/2 = 50 MB/s, i.e. at most
~40 MB/s once the sharing overhead is subtracted; his array's 70 MB/s
is above that, so the bus, not the array, is the bottleneck.

> I dropped caches before each test.
>
> Also, CPU load is slightly higher when tgtd is used as a target,
> although I didn't do any precise measurements here. With tgt, usage
> on both CPUs settled at more or less the same level; with SCST, CPU0
> usage jumped up and down by +/- 20%, while CPU1 usage stayed more or
> less level.
>
> It would be nice to know more about the nature of these CPU usage
> spikes for SCST (and possibly, how they could affect e.g. reading
> from an encrypted device-mapper volume).
>
> tgt-git:
> CPU0: ~30%
> CPU1: ~90%
>
> SCST-r234:
> CPU0: ~30%
> CPU1: ~80%
>
> Read on the target:
>
> # dd if=/dev/sda of=/dev/null bs=64k count=50000
> 50000+0 records in
> 50000+0 records out
> 3276800000 bytes (3.3 GB) copied, 45.9789 seconds, 71.3 MB/s
>
> SCST-r234:
>
> # dd if=/dev/sdba of=/dev/null bs=64k count=50000
> 50000+0 records in
> 50000+0 records out
> 3276800000 bytes (3.3 GB) copied, 90.817 seconds, 36.1 MB/s
>
> tgt-git:
>
> # dd if=/dev/sday of=/dev/null bs=64k count=50000
> 50000+0 records in
> 50000+0 records out
> 3276800000 bytes (3.3 GB) copied, 140.123 seconds, 23.4 MB/s
>
> tgt-20071014:
>
> # dd if=/dev/sday of=/dev/null bs=64k count=50000
> 50000+0 records in
> 50000+0 records out
> 3276800000 bytes (3.3 GB) copied, 138.754 seconds, 23.6 MB/s
>
> BTW, is it possible to do a nullio test with a tgt target?
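
On the cache dropping: since cache state matters a lot for runs like
these, for completeness, on 2.6.16+ kernels flushing and dropping the
page cache before each dd is just (run on both the target and the
initiator):

# sync
# echo 3 > /proc/sys/vm/drop_caches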
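
On the CPU spikes: it would help to capture per-CPU utilization at a
fixed sampling interval instead of watching top. A simple way, assuming
the sysstat package is installed, is to run mpstat in parallel with the
dd, e.g.:

# mpstat -P ALL 1 90 >cpu-scst.log &
# dd if=/dev/sdba of=/dev/null bs=64k count=50000

The log would show whether the +/- 20% jumps on CPU0 are periodic and
whether they show up as user, system, or interrupt time.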
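
On the nullio question: I can't say for tgt offhand, but on the SCST
side the vdisk handler has a NULLIO mode (reads and writes complete
without touching any backstorage), which takes the disk side out of the
picture entirely. From memory, with the /proc interface of that era the
setup is roughly the lines below; please check the SCST README for the
exact syntax:

# echo "open nullio NULLIO" >/proc/scsi_tgt/vdisk/vdisk
# echo "add nullio 0" >/proc/scsi_tgt/groups/Default/devices

Reading from that LUN on the initiator then measures the raw iSCSI path
alone, with the target's PCI bus crossed only once per byte.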