[Sheepdog] Sheepdog performance

Haven haven at thehavennet.org.uk
Tue Apr 5 11:57:29 CEST 2011


I've got a simple 2-node cluster with its store on ext4, hosting a single
virtual machine (running on the same node as one of the cluster members).

Running the same test here on the virtual machine, I'm getting:
524288000 bytes (524 MB) copied, 15.2004 s, 34.5 MB/s

Running it directly on the underlying drive of one of the cluster nodes, I get:
524288000 bytes (524 MB) copied, 7.54742 s, 69.5 MB/s
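
For reference, the test in both cases is the dd invocation from the quoted
mail below:

  dd if=/dev/zero of=/tmp/test bs=1M count=500 oflag=direct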

I'm mounting the sheepdog store as ext4 on an LVM volume, with the mount
options:
noatime,barrier=0,user_xattr,data=writeback
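
As a sketch, the mount invocation looks something like this (the volume and
mount point names are placeholders; only the options are the ones I actually
use):

  mount -t ext4 -o noatime,barrier=0,user_xattr,data=writeback \
      /dev/vg0/sheep /var/lib/sheepdog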

This is over a fairly standard gigabit network.

Not sure how much that helps you, but it may prove of use.

Regards

Simon

> Hi. I'm doing some testing with an initial version of our infrastructure
> ported to run over Sheepdog 0.2.2. I'm seeing some performance problems I
> don't remember seeing when I tested a few months back, and wondered whether
> I'm doing something obviously wrong!
>
> I've set up a single host with six SATA drives, made a btrfs on each drive,
> mounted them at /sheep/{0,1,2,3,4,5} with default mount options, and run one
> sheep per mount point on ports 7000 -> 7005. These drives are reasonably
> fast (80MB/s or so) and independent of one another; this isn't the obviously
> bad configuration of six store directories all on the same backing device!
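>
> Roughly, the setup was done along these lines (the device names are
> placeholders and the sheep flags are from memory, so treat it as a sketch
> rather than the exact commands):
>
>   i=0
>   for dev in sdb sdc sdd sde sdf sdg; do   # six independent SATA drives
>       mkfs.btrfs /dev/$dev                 # one btrfs per drive
>       mkdir -p /sheep/$i
>       mount /dev/$dev /sheep/$i            # default mount options
>       sheep -p $((7000 + i)) /sheep/$i     # one daemon per store directory
>       i=$((i + 1))
>   done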
>
> A 10GB file in (say) /sheep/0/ used as a raw drive image with
>
>   -drive file=/sheep/0/test.img,if=ide,index=0,cache=none
>
> gets reasonable performance of around 52MB/s doing a simple
>
>   dd if=/dev/zero of=/tmp/test bs=1M count=500 oflag=direct
>
> test from within the guest. However, if I create a sheepdog drive by
>
>   collie cluster format [default x3 data replication]
>   qemu-img convert /sheep/0/test.img sheepdog:test [takes hours]
>
> and then start qemu with
>
>   -drive file=sheepdog:test,if=ide,index=0
>
> I'm only getting around 5MB/s with the same write test.
>
> I see similarly poor (perhaps a little better) performance with ext4 +
> user_xattr.
>
> Should I be mounting the filesystems with options other than the defaults,
> or is there a bottleneck I'm not aware of with multiple sheep on a single
> host, despite the independent drives? Is there a good way to find out what
> the bottleneck really is here?
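>
> One thing I can do while the guest runs the write test is watch per-drive
> utilisation with standard sysstat tooling, e.g.
>
>   iostat -x 1
>
> to see whether the individual disks are actually saturated, but I'd welcome
> any sheepdog-specific pointers.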
>
> Best wishes,
>
> Chris.
>
> PS Is btrfs still the recommended configuration for reliable recovery, or is
> that recommendation no longer applicable? The code and documentation no
> longer mention it, but I remember that at one stage it was needed for atomic
> filesystem transactions.