[Sheepdog] Sheepdog performance

Chris Webb chris at arachsys.com
Tue Apr 5 10:55:00 CEST 2011

Hi. I'm doing some testing with an initial version of our infrastructure
ported to run over Sheepdog 0.2.2. I'm seeing some performance problems I
don't remember seeing when I tested before, a few months back, and wondered
whether I'm doing something obviously wrong!

I've set up a single host with six SATA drives, made a btrfs on each drive,
mounted in /sheep/{0,1,2,3,4,5} with default mount options, and run sheep on
ports 7000 -> 7005, one for each of these mount points. These drives are
reasonably fast (80MB/s or so), and independent of one another---this isn't
the obviously bad configuration of six store directories all on the same
backing devices!
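For reference, the setup boils down to something like the following sketch
(device names /dev/sdb through /dev/sdg are illustrative, and I'm assuming
the 0.2.x invocation of sheep -p PORT DIR):

```shell
# Illustrative setup: six independent SATA drives, one btrfs filesystem
# and one sheep daemon per drive, store directories /sheep/0../sheep/5.
i=0
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
  mkfs.btrfs "$dev"
  mkdir -p /sheep/$i
  mount "$dev" /sheep/$i            # default mount options
  sheep -p $((7000 + i)) /sheep/$i  # one daemon per store directory
  i=$((i + 1))
done
```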

A 10GB file in (say) /sheep/0/ used as a raw drive image with

  -drive file=/sheep/0/test.img,if=ide,index=0,cache=none

gets reasonable performance of around 52MB/s doing a simple

  dd if=/dev/zero of=/tmp/test bs=1M count=500 oflag=direct

test from within the guest. However, if I create a sheepdog drive by

  collie cluster format [default x3 data replication]
  qemu-img convert /sheep/0/test.img sheepdog:test [takes hours]

and then start qemu with

  -drive file=sheepdog:test,if=ide,index=0

I'm only getting around 5MB/s with the same write test.
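One way to take the guest and the IDE emulation out of the picture (a sketch,
assuming the qemu-io in this build understands sheepdog: URLs) is to write to
the image directly from the host:

```shell
# Write 64 MiB straight to the sheepdog image from the host; qemu-io
# reports the achieved throughput, so a slow result here would point
# at sheepdog itself rather than at the guest or the IDE emulation.
qemu-io -c 'write 0 64M' sheepdog:test
```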

I see similarly poor (perhaps a little better) performance with ext4 as the
backing filesystem.

Should I be mounting the filesystems with options other than the defaults,
or is there a bottleneck I'm not aware of with multiple sheep on a single
host, despite the independent drives? Is there a good way to find out what
the bottleneck really is here?
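In case it helps anyone reproduce this, one crude check I can run on the host
is to sample /proc/diskstats while the guest test is in progress (field 10 is
sectors written, 512 bytes each) and see whether one disk, all six, or none of
them is actually absorbing writes:

```shell
# snapshot: record device name and sectors-written (field 10 of
# /proc/diskstats; one sector is 512 bytes)
snap() { awk '{print $3, $10}' /proc/diskstats | sort > "$1"; }

# rate SNAP1 SNAP2 SECONDS: per-device write throughput between snapshots
rate() {
  join "$1" "$2" | awk -v s="$3" \
    '{ mb = ($3 - $2) * 512 / 1048576 / s }
     mb > 0 { printf "%-10s %7.1f MB/s written\n", $1, mb }'
}

# run on the host while the guest dd test is in progress:
snap /tmp/ds.0; sleep 5; snap /tmp/ds.1
rate /tmp/ds.0 /tmp/ds.1 5
```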

Best wishes,

Chris

PS Is btrfs still the recommended configuration for reliable recovery, or is
that recommendation no longer applicable? The code and documentation no
longer mention it, but I remember that at one stage it was needed for atomic
filesystem transactions.
