[sheepdog-users] Concern about sheepdog performance

Valerio Pachera sirio81 at gmail.com
Mon Dec 17 10:20:07 CET 2012

Hi, until now I've been focusing on data integrity tests, but I've
also started to look at performance:
*I'm stuck at 10-12 MB/s.*

I have
- 3 'hp microserver' with sata hdd, gigabit network card, amd turion
dual core cpu
- 1 pc/server, intel i5, ssd disk, gigabit network.
All with debian wheezy and the same sheepdog version.

The speed test is done by running a guest with systemrescuecd and a
single vdi disk.
From the guest I create sda1 with xfs and run dd:
  dd if=/dev/zero of=/mnt/sda1/test bs=1M count=512
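A side note on the test itself: dd without a sync flag can report the
guest's page-cache speed rather than real disk speed, so a variant with
conv=fdatasync may be more honest (the /tmp path and smaller count below
are just illustrative):

```shell
# Same idea as the test above, but with conv=fdatasync so dd
# flushes data to disk before reporting the rate; without it,
# dd can report page-cache speed instead of disk speed.
# (Smaller count and /tmp path here just to illustrate the flag.)
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
```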

The cluster is formatted with 2 copies.
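For reference, the format command looks like this, as far as I
understand the collie tool (run once, after the sheep daemons are up):

```shell
# Format the cluster so each object is stored in 2 copies.
collie cluster format --copies=2
```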

I noticed that:
1) using 2 or 4 nodes doesn't change the performance
2) higher disk speed doesn't increase the performance
3) moving the journal to another device doesn't increase the performance
4) enabling the cache doesn't increase the performance
5) using different qemu options doesn't increase the performance
6) only writing the same file twice increases the performance
7) preallocating the disk doesn't increase the performance
8) the bridge doesn't matter

1) No explanation needed.
2) The hosts' disks can write at more than 90 MB/s. On one node I
created a raid0 across two disks (220 MB/s). On the second node
there's an ssd/pci device able to write at more than 600 MB/s.
    The guest still writes no more than 12 MB/s.
3) I followed these instructions
(https://github.com/collie/sheepdog/wiki/Journaling), but it didn't help:
    sheep -j dir=/mnt/sdb1,size=256 /mnt/sheepdog
4) Enabling the cache with -w object,size=256 might gain a little
(14 MB/s), but I don't consider that significant.
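In case it matters, the full sheep invocation with both the journal and
the object cache enabled looks like this (the paths are from my setup):

```shell
# Start sheep with the journal on a separate device and the
# object cache enabled (256 MB each); paths are from my setup.
sheep -j dir=/mnt/sdb1,size=256 -w object,size=256 /mnt/sheepdog
```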
5) I've tried
      cache=none/writethrough/writeback/unsafe and if=scsi/virtio/ide.
    No significant changes.
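For completeness, one of the qemu command lines tested looked roughly
like this (the vdi name 'test' and the iso filename are just examples):

```shell
# Boot systemrescuecd with the sheepdog vdi named 'test' attached
# via virtio with cache=none (one of the combinations tried).
qemu-system-x86_64 -m 1024 \
  -drive file=sheepdog:test,if=virtio,cache=none \
  -cdrom systemrescuecd.iso -boot d
```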
6)  dd if=/dev/zero of=/mnt/sda1/test bs=1M count=512   (10.5 MB/s)
     dd if=/dev/zero of=/mnt/sda1/test bs=1M count=512   (21 MB/s)
     dd if=/dev/zero of=/mnt/sda1/anotherfile bs=1M count=512  (10.5 MB/s)
7)  Given the above results, I expected a preallocated disk to write
at 20 MB/s... but it's still ~10 MB/s.
8)  My nodes have a single nic, and it was used in a bridge (eth0/br0).
     To be sure the low performance was not related to the bridge,
I simply removed it.
      Nothing changed.
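One check I could still do is measuring raw TCP throughput between two
nodes, to make sure the gigabit links themselves are fine, e.g. with
iperf (the address below is a placeholder for the first node):

```shell
# On one node, start an iperf server:
iperf -s
# On another node, measure TCP throughput to it for 10 seconds:
iperf -c 192.168.0.10 -t 10
```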

How fast are you able to write on your cluster?
Is there something I didn't consider?

Thank you.
