[Sheepdog] Sheepdog performance

MORITA Kazutaka morita.kazutaka at lab.ntt.co.jp
Tue Apr 5 15:32:15 CEST 2011

At Tue, 5 Apr 2011 09:55:00 +0100,
Chris Webb wrote:
> Hi. I'm doing some testing with an initial version of our infrastructure
> ported to run over Sheepdog 0.2.2. I'm seeing some performance problems I
> don't remember seeing when I tested before, a few months back, and wondered
> whether I'm doing something obviously wrong!
> I've set up a single host with six SATA drives, made a btrfs on each drive,
> mounted in /sheep/{0,1,2,3,4,5} with default mount options, and run sheep on
ports 7000 -> 7005 for each of these mount points. These drives are
> reasonably fast (80MB/s or so), and independent of one another---this isn't
> the obviously bad configuration of six store directories all on the same
> backing devices!

It should be the ideal environment to test multiple sheep daemons on
one server.

> A 10GB file in (say) /sheep/0/ used as a raw drive image with
>   -drive file=/sheep/0/test.img,if=ide,index=0,cache=none
> gets reasonable performance of around 52MB/s doing a simple
>   dd if=/dev/zero of=/tmp/test bs=1M count=500 oflag=direct
> test from within the guest. However, if I create a sheepdog drive by
>   collie cluster format [default x3 data replication]
>   qemu-img convert /sheep/0/test.img sheepdog:test [takes hours]
> and then start qemu with
>   -drive file=sheepdog:test,if=ide,index=0
> I'm only getting around 5MB/s with the same write test.
> I see similarly poor (perhaps a little better) performance with ext4 +
> user_xattr.
> Should I be mounting the filesystems with options other than the defaults,
> or is there a bottleneck I'm not aware of with multiple sheep on a single
> host, despite the independent drives? Is there a good way to find out what
> the bottleneck really is here?

I'm not familiar with btrfs mount options, but if a raw image shows
good performance on the same file system, I think this is a problem
with Sheepdog.  To be honest, I don't have the slightest idea why
Sheepdog performs so badly in your environment.

Would you test the following settings on the same host?
 1) 1 sheep daemon and no data replication (add "-c 1" to format options)
 2) 6 sheep daemons and no data replication
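For reference, the two test configurations above could be set up roughly like this. Paths and ports follow the original report; the exact sheep/collie argument order for version 0.2.x is an assumption, so adjust to your installed tools:

```shell
# Test 1): a single sheep daemon, no data replication.
sheep -p 7000 /sheep/0                 # one store directory, one daemon
collie cluster format -c 1 -p 7000     # "-c 1" = keep only one copy of each object

# Test 2): six sheep daemons on one host, still no replication.
for i in 0 1 2 3 4 5; do
    sheep -p 700$i /sheep/$i           # one daemon per independent disk
done
collie cluster format -c 1 -p 7000

# Then re-run the same guest benchmark in each case, e.g.
#   dd if=/dev/zero of=/tmp/test bs=1M count=500 oflag=direct
```

Comparing the two results against the raw-image baseline should isolate whether the slowdown comes from replication or from co-locating daemons.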

If 2) shows good performance, the problem is probably in Sheepdog's
replication flow.  If 1) shows good performance but 2) does not, there
may be an unknown problem with running multiple daemons on the same
host (or something is wrong with one of your disks, perhaps).

> PS Is btrfs still the recommended configuration for reliable recovery, or is
> that recommendation no longer applicable? The code and documentation no
> longer mentions it, but I remember at one stage it was needed for atomic
> filesystem transactions.

Currently, Sheepdog does not need btrfs.  Sheepdog does its own
journaling internally, so vdi objects can be updated safely.
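The general idea behind that kind of journaled update (a sketch of write-ahead journaling in shell, for illustration only; this is not Sheepdog's actual code or file layout) is to make the new data durable in a journal file before touching the object itself:

```shell
set -e
printf 'old contents' > object

# 1) Write the full new contents to a journal file and force it to disk.
printf 'new contents' > object.journal
sync

# 2) Apply the update to the real object.
cp object.journal object
sync

# 3) Only now delete the journal.  A crash before this point is recovered
#    by replaying object.journal; after it, the object is already current.
rm object.journal

cat object
```

A crash at any single step leaves either the old object intact or a complete journal to replay, which is why the update is safe without filesystem-level transactions.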
