[stgt] poor rbd performance
Dan Mick
dan.mick at inktank.com
Sat Aug 23 00:17:11 CEST 2014
Hello, name from a past life...
I wrote the original rbd backend (bs_rbd), and there was very little
attempt to even consider performance, and certainly no study; it was and
is a proof of concept. I don't know offhand what may be at fault, but I
know it's a target-rich environment, because to my knowledge no one has
ever gone hunting at all.
Several people have recommended making use of Ceph's async interfaces; I
don't know how much of a win that would be, because stgt already has a
pool of worker threads for outstanding requests. I also don't know how
hard it is to monitor things like thread utilization inside stgt.
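
To make the async option concrete: librbd's C API has rbd_aio_read and
rbd_aio_write with completion callbacks, so a backend can keep many
commands in flight from one submitter instead of tying up a worker
thread per outstanding request. A minimal sketch of the calls involved
(illustrative only, not actual bs_rbd code; error handling trimmed):

/* Submit one read via the librbd AIO interface. The callback runs
 * on a librbd thread when the OSDs reply, so the submitting thread
 * never blocks on the network. */
#include <sys/types.h>
#include <stdint.h>
#include <stdio.h>
#include <rbd/librbd.h>

static void read_done(rbd_completion_t comp, void *arg)
{
    ssize_t ret = rbd_aio_get_return_value(comp);
    /* negative is -errno; otherwise bytes read into the buffer */
    printf("aio read finished: %zd\n", ret);
    rbd_aio_release(comp);
}

static int submit_read(rbd_image_t image, char *buf,
                       uint64_t off, size_t len)
{
    rbd_completion_t comp;
    int ret = rbd_aio_create_completion(buf, read_done, &comp);
    if (ret < 0)
        return ret;
    ret = rbd_aio_read(image, off, len, buf, comp);
    if (ret < 0)
        rbd_aio_release(comp);
    return ret;
}

Whether that actually beats the existing thread pool depends on where
those worker threads block today; measuring that first would be the
honest way to find out.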
But I'm interested in the subject and am happy to answer Ceph questions
if you have them.
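
For what it's worth, the direct-librados numbers Wyllys mentions below
are easy to reproduce without iSCSI in the path. Something like this
sequential-read loop gives a librbd-only baseline to compare the tgt
numbers against (pool and image names are placeholders, error checks
omitted):

#include <rados/librados.h>
#include <rbd/librbd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rbd_image_t image;
    rbd_image_info_t info;
    const size_t bs = 4 << 20;              /* 4 MiB per read */
    char *buf = malloc(bs);
    uint64_t off;

    rados_create(&cluster, NULL);           /* connect as client.admin */
    rados_conf_read_file(cluster, NULL);    /* default ceph.conf search */
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &io);    /* pool: placeholder */
    rbd_open(io, "testimg", &image, NULL);      /* image: placeholder */

    rbd_stat(image, &info, sizeof(info));
    for (off = 0; off + bs <= info.size; off += bs)
        if (rbd_read(image, off, bs, buf) < 0)
            break;

    rbd_close(image);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    free(buf);
    return 0;
}

Build with "cc -o rbdread rbdread.c -lrbd -lrados" and run it under
time(1); if that alone is far above what you see through tgt, the gap
really is in the target path.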
On 08/22/2014 05:55 AM, Wyllys Ingersoll wrote:
> I'm seeing some disappointing performance numbers using the bs_rbd backend
> against a Ceph RBD pool over a 10 Gb Ethernet link.
>
> Read operations appear to max out at about 100 MB/s, regardless of block
> size or the amount of data being read, and write operations fare much
> worse, maxing out somewhere in the 40 MB/s range. Any ideas why this
> would be so limited?
>
> I've tested with 'fio' as well as some other perf-testing utilities. On
> the same link, talking to the same Ceph pool/image but using librados
> directly (through either the C or Python bindings), read performance is
> 5-8x faster and write performance is 2-3x faster.
>
> Any suggestions as to how to tune the iSCSI or bs_rbd interface to perform
> better?
>
> thanks,
> Wyllys Ingersoll
> --
--
To unsubscribe from this list: send the line "unsubscribe stgt" in
the body of a message to majordomo at vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html