[sheepdog] Sheepdog kernel client

Chris Webb chris at arachsys.com
Wed Oct 3 11:21:09 CEST 2012


MORITA Kazutaka <morita.kazutaka at lab.ntt.co.jp> writes:

> At Mon, 1 Oct 2012 13:53:55 +0100,
> Chris Webb wrote:
> > 
> > MORITA Kazutaka <morita.kazutaka at lab.ntt.co.jp> writes:
> > 
> > > Chris Webb wrote:
> > > > 
> > > > I remember that loopback iSCSI and NBD are very prone to deadlock under
> > > > memory pressure, because more dirty pages need to be created in order to
> > > > make progress with writing out the existing ones. Presumably a kernel
> > > > sheepdog driver would suffer from the same problem, and it would be very
> > > > hard to let sheepdog hosts mount filesystems on a cluster of which
> > > > they're a part?
> > > 
> > > Probably the answer is yes...  I thought it would be nice to
> > > access a sheepdog gateway on localhost via the iSCSI protocol, but
> > > it would lead to the same deadlock problem.
> > [...] 
> > > Seems that Ceph suffers from the same problem:
> > >   http://tracker.newdream.net/issues/3076
> > 
> > I wonder whether memory cgroups will eventually provide a mechanism that can
> > help here. Perhaps a restricted container on the host could safely access
> > the sheepdog-backed block device because it is constrained by memcg to never
> > be able to dirty enough pages that the host is unable to make progress.
> > Private filesystem namespaces could be used to explicitly ensure the mount
> > isn't accidentally touched outside the memcg restriction.
> 
> So sheep can allocate memory without having to flush the page cache
> of sheepdog VDIs?  Sounds like a reasonable way to cope with this problem.

I'm not sure whether this mechanism is sufficient to prevent the problem,
so this is just speculation really. However, it's the container equivalent
of running the client in a VM on the host, so hopefully it will be.
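
Concretely, the sort of containment I have in mind might look something
like the sketch below. This is purely illustrative: the cgroup name, the
256MB limit and the shell it ends up in are all placeholders, and it
assumes a v1 memory controller mounted at /sys/fs/cgroup/memory.

  /* Hypothetical sketch: confine the mounting process to its own memory
   * cgroup and a private mount namespace before touching the
   * sheepdog-backed block device.  Paths and limits are placeholders. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <unistd.h>

  static void write_file(const char *path, const char *value)
  {
    int fd = open(path, O_WRONLY);
    if (fd < 0 || write(fd, value, strlen(value)) < 0) {
      perror(path);
      exit(1);
    }
    close(fd);
  }

  int main(void)
  {
    char pid[32];

    /* Create a dedicated memcg and cap it well below the host total, so
     * tasks inside it can never dirty enough pages to stall the host. */
    mkdir("/sys/fs/cgroup/memory/sheepmount", 0755);
    write_file("/sys/fs/cgroup/memory/sheepmount/memory.limit_in_bytes",
               "268435456");  /* 256MB, arbitrary for the sketch */

    /* Move ourselves into the new cgroup. */
    snprintf(pid, sizeof pid, "%d\n", getpid());
    write_file("/sys/fs/cgroup/memory/sheepmount/tasks", pid);

    /* Private mount namespace: the mount can't be touched accidentally
     * by processes outside this restricted container. */
    if (unshare(CLONE_NEWNS) < 0) {
      perror("unshare");
      exit(1);
    }

    /* From here a shell (or mount(2) directly) can mount the
     * sheepdog-backed block device without exposing it to the host. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return 1;
  }

Whether a memcg limit alone actually bounds the number of pages a task
can dirty is of course exactly the open question here.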

Cheers,

Chris.


