[stgt] understanding how to tune end-to-end
cbarry at rjmetrics.com
Sat Oct 13 05:20:54 CEST 2012
I've successfully built a Linux storage device for testing, using tgt to
serve logical volumes on 1.5TB of SSDs in a RAID0, over iSER with
InfiniBand as the interconnect. The device was built to run and test KVM
guests from. I'm testing with RAID0 here just for the speed; redundancy
is not a concern for this testing.
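For anyone curious about the target side, the export amounts to roughly
the following tgtadm calls (the tid, IQN, and backing LV path are just
examples for whatever your environment uses):

```shell
# Rough sketch of the target-side export over iSER; requires a running
# tgtd. The tid, IQN, and backing logical volume are placeholders.
tgtadm --lld iser --mode target --op new --tid 1 \
       --targetname iqn.2012-10.com.example:ssd-r0
tgtadm --lld iser --mode logicalunit --op new --tid 1 --lun 1 \
       --backing-store /dev/vg_ssd/kvm_lv
tgtadm --lld iser --mode target --op bind --tid 1 --initiator-address ALL
```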
The hosts attach to the storage via iSER, and then expose these LUNs to
their KVM guests. Already the speeds are very impressive, but I'd like
to better understand how all of these layers interact, so I can learn
to tune the system better.
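On the host side, attaching over iSER rather than plain TCP iSCSI is
just a transport override before login; something like this (the portal
address and IQN below are examples):

```shell
# Discover the target, force the iser transport, then log in.
# Requires open-iscsi; portal and IQN are placeholders.
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node -T iqn.2012-10.com.example:ssd-r0 \
         -o update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2012-10.com.example:ssd-r0 --login
```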
I wrote a script to run a batch of fio tests from a guest through to the
storage device. The guest has 4 vCPUs and 12GB of RAM and uses virtio.
The tests try a range of block sizes and thread counts (1-4) to see how
the different settings relate to bandwidth, IOPS, etc. An online copy of
the results is here: http://snipurl.com/25a2m2n
and the script I used is here: http://snipurl.com/25a2nan
if anyone is interested.
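In outline, the sweep looks something like the following. This is a
simplified sketch rather than the script itself; the device path and fio
options are examples, and the leading echo just prints each invocation
so the sketch runs without fio installed:

```shell
#!/bin/sh
# Simplified sketch of a block-size/thread-count sweep. The device path
# and fio options are examples; drop the leading "echo" to actually run
# fio against the virtio disk.
for bs in 4k 16k 64k 256k 1m; do
    for threads in 1 2 3 4; do
        echo fio --name=sweep --filename=/dev/vdb --direct=1 \
            --rw=randrw --bs="$bs" --numjobs="$threads" \
            --iodepth=32 --runtime=30 --time_based --group_reporting
    done
done
```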
I'm not really sure what to tweak at what point in the stack to maximize
guest performance. Can anyone shed some light on this?
I realize there are many unrelated-to-tgt layers here, but thanks for
any advice you can give.