[Sheepdog] Sheepdog performance
morita.kazutaka at lab.ntt.co.jp
Thu Apr 7 03:04:51 CEST 2011
At Tue, 5 Apr 2011 17:02:25 +0100,
Chris Webb wrote:
> MORITA Kazutaka <morita.kazutaka at lab.ntt.co.jp> writes:
> > I'm not familiar with btrfs mount options, but if a raw image shows
> > good performance on the same file system, I think this is a problem of
> > Sheepdog. To be honest, I don't have the slightest idea why Sheepdog
> > shows such bad results in your environment.
> > I think a barrier option makes a huge difference.
> Hi Kazutaka. I've just reformatted with ext4 and performed an initial two
> tests: with and without barrier=0, using the existing triple-replicated
> configuration and using the default ext4 data=ordered mode.
> With barriers on (default ext4), I still see 5-6MB/s write performance on
> unallocated blocks, and around 10MB/s write performance rewriting those
> blocks once they've been allocated.
> Turning barriers off, this becomes more like 53MB/s (on unallocated blocks)
> and 60MB/s rewriting already allocated blocks. This is a very dramatic
> difference, as you predicted!
I've seen similar discussions:
> The btrfs results are presumably 5-6MB/s because barriers are enabled by
> default there too. (There was no difference between allocated and unallocated
> blocks in btrfs, presumably because CoW behaviour means blocks always have
> to be allocated afresh when writing to btrfs files, with the exception of
> O_DIRECT access that sheepdog doesn't use.)
> Turning off barriers essentially means that write-ordering to disk is no
> longer guaranteed in a power-failure situation. Isn't this likely to cause
> corruption on a sheepdog cluster in the same way as a traditional
> filesystem, or are there mechanisms in place to avoid this effect at a
> higher level? What is it that sheepdog does that makes barriers so
> prohibitively expensive (factor of 10+) on sheep file stores? Very heavy
> filesystem metadata update?
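For reference, the barrier comparison above can be reproduced with something like the following sketch. The mount point and file names are placeholders, remounting requires root, and dd gives only a rough sequential-write figure:

```shell
# Default ext4 mount: write barriers (disk cache flushes) enabled.
mount -o remount,barrier=1 /mnt/sheep-store
dd if=/dev/zero of=/mnt/sheep-store/testfile bs=1M count=256 conv=fsync

# Barriers off: much faster, but write ordering is no longer
# guaranteed to reach the platter in order across a power failure.
mount -o remount,barrier=0 /mnt/sheep-store
dd if=/dev/zero of=/mnt/sheep-store/testfile bs=1M count=256 conv=fsync
```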
If you use the cache=writethrough option (the default), I think you will see
that a raw image also shows poor performance (though I don't know the
exact reason). In my environment:

  cache=writethrough (default)
    raw                11.4 MB/s
    qcow2              11.4 MB/s
    sheepdog (1 node)  10.5 MB/s

  cache=none
    raw                77.0 MB/s
    qcow2              65.9 MB/s
    sheepdog (1 node)  10.5 MB/s
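For comparison, the cache modes are selected per drive on the qemu command line; a sketch, with placeholder image names:

```shell
# cache=writethrough (default): goes through the host page cache,
# but every guest write is flushed to stable storage.
qemu-system-x86_64 -drive file=test.raw,if=virtio,cache=writethrough

# cache=none: bypasses the host page cache via O_DIRECT.
qemu-system-x86_64 -drive file=test.raw,if=virtio,cache=none

# Sheepdog volumes are addressed by VDI name rather than by path.
qemu-system-x86_64 -drive file=sheepdog:test,if=virtio,cache=none
```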
The sheepdog driver doesn't support the cache flags, so its performance
is still bad even when we use cache=none (O_DIRECT). I added partial
support for O_DIRECT in the patches I just sent.
After applying the patches (add "-D" to the sheep command-line arguments):

  sheepdog (1 node, cache=writethrough)  38.8 MB/s
  sheepdog (1 node, cache=none)          62.7 MB/s
The patches are also in
In the patches, Sheepdog uses O_DIRECT only for data objects, so the
performance of metadata operations is still bad.
In the future, we need to define what the cache flags mean for Sheepdog
and implement them.