[sheepdog] Sheepdog kernel client
MORITA Kazutaka
morita.kazutaka at lab.ntt.co.jp
Mon Oct 1 14:49:18 CEST 2012
At Sun, 30 Sep 2012 11:48:58 +0100,
Chris Webb wrote:
>
> A colleague and I have been discussing the possibility of using Sheepdog for
> the storage backing physical hosts as well as qemu virtual machines. It
> feels like it wouldn't be particularly hard to take the relatively simple
> qemu <-> sheep protocol defined in qemu/block/sheepdog.c and write a kernel
> block device, perhaps based on the existing linux nbd driver.
>
> Whilst there aren't any obvious problems with mounting a block device
> backed on sheepdog outside the cluster, I'm worried about mounts of sheepdog
> block devices on hosts within the cluster, or even on a machine that's just
> acting as a gateway. Am I right that this is unlikely to work?
>
> I remember that loopback iSCSI and nbd are very prone to deadlock under
> memory pressure, because more dirty pages need to be created to be able to
> progress with writing out the existing ones. Presumably a kernel sheepdog
> driver would suffer from the same problem, and it would be very hard to
> enable sheepdog hosts to mount filesystems on a cluster of which they're a
> part?
Probably the answer is yes. I had thought it would be nice to access a
sheepdog gateway on localhost over iSCSI, but that would run into the
same deadlock problem.
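For what it's worth, the wire format that block/sheepdog.c speaks really is
small: each request is a fixed-size header followed by data_length bytes of
payload, and the response header mirrors it with a result code. A rough
sketch of that header, paraphrased from memory of the qemu source (field
names and layout should be checked against the tree, so treat this as
illustrative only):

/* Rough sketch of the sheepdog request header as used by qemu's
 * block/sheepdog.c.  Paraphrased from memory of the qemu source:
 * field names and layout are approximate, not authoritative. */
#include <stdint.h>

struct sd_req_sketch {
    uint8_t  proto_ver;          /* protocol version */
    uint8_t  opcode;             /* e.g. read/write object, lock vdi */
    uint16_t flags;              /* request flags (e.g. write) */
    uint32_t epoch;              /* cluster epoch the client believes in */
    uint32_t id;                 /* request id, echoed in the response */
    uint32_t data_length;        /* bytes of payload after the header */
    uint32_t opcode_specific[8]; /* oid, offset, copies, ... per opcode */
};
/* Followed on the wire by data_length bytes of payload; the response
 * is a header of the same size carrying a result code. */

So the protocol side of an nbd-style kernel client looks manageable; the
memory-pressure problem above is the harder part.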
>
> (But somehow, cluster filesystems like Gluster and Ceph have user mode
> storage servers and are still able to mount the filesystem on the same nodes
> as the storage. I'm puzzled that the same problem doesn't afflict them. Is
> there some technique they use to avoid deadlock that would be applicable to
> a Sheepdog kernel client?)
It seems that Ceph suffers from the same problem:
http://tracker.newdream.net/issues/3076
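The usual partial mitigations for this kind of loopback deadlock are to pin
the serving daemon's memory and to avoid creating new dirty page-cache pages
while it services writeback. A minimal sketch of what that could look like
in a userspace gateway follows; this is illustrative only, not something
sheep does today, and the backing file path is just a placeholder:

/* Illustrative sketch only: two partial mitigations a userspace block
 * server can apply against the loopback writeback deadlock.  Neither
 * is taken from sheep; both are assumptions for illustration. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Pin the daemon's pages so that servicing a write never waits on
     * reclaim or swap-in of the daemon's own memory. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
        perror("mlockall");

    /* Open backing storage with O_DIRECT so completing a client write
     * does not dirty additional page-cache pages (buffers must then be
     * suitably aligned).  The path here is a placeholder. */
    int fd = open("backing-object", O_RDWR | O_DIRECT);
    if (fd < 0)
        perror("open");

    /* ... accept connections and serve block requests here ... */

    if (fd >= 0)
        close(fd);
    return 0;
}

Even with both in place, memory allocated in the kernel network path on
behalf of the daemon can still block on writeback to the very device being
served, which is why loopback nbd and iSCSI remain fragile.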
Thanks,
Kazutaka