[Sheepdog] Sheepdog 0.3.0 schedule and 0.4.0 plan

Christoph Hellwig hch at infradead.org
Tue Nov 15 13:58:30 CET 2011


On Tue, Nov 15, 2011 at 08:47:24PM +0900, MORITA Kazutaka wrote:
> The key idea in the above link is that, when writeback is enabled, a
> gateway node can send write responses to VMs before replicating data
> to storage nodes.  Note that a VM sends write requests to one of the
> Sheepdog nodes (the gateway node) first, and that node then replicates
> the data to multiple nodes (storage nodes).  Even with this approach,
> the gateway node can still send the unstable write requests to the
> storage nodes ASAP, before receiving flush requests.  I think this
> reduces the write latency when we use Sheepdog in a WAN environment.

Okay, now I understand.  Yes, this sounds like a useful idea to me.
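
Just to spell out how I read the writeback path (all names below are
made up for illustration, not actual sheepdog code):

#include <stdint.h>
#include <stddef.h>

struct write_req {
	uint64_t oid;		/* object being written */
	uint64_t offset;
	size_t len;
	const void *buf;
	int nr_copies;
};

/* provided elsewhere: forward the request to one storage node */
int forward_write(int node, const struct write_req *req);
/* provided elsewhere: complete the request towards the VM */
void ack_vm(const struct write_req *req, int result);

/* writeback: ack the VM first, then replicate eagerly */
void gateway_write_wb(const struct write_req *req, const int *nodes)
{
	int i;

	ack_vm(req, 0);			/* VM sees gateway latency only */
	for (i = 0; i < req->nr_copies; i++)
		forward_write(nodes[i], req);	/* unstable until flushed */
}

/*
 * flush: only here do we wait until every copy is on stable storage;
 * the data stays unstable between the early ack and the flush.
 */
int gateway_flush(void);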

> If the gateway node writes data to the mmapped area before sending
> responses to VMs, we can regard the local mmapped file as a Sheepdog
> disk cache - this is what I meant in the above link.  This approach
> may also reduce the read latency in a WAN environment.

Any idea why you care about an mmapped area specifically?  Shared
writable mmaps are a horrible I/O interface; most notably, they don't
allow for any kind of error handling.  I would absolutely advise
against using them for clustered storage.
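
For comparison, with an explicit I/O path every failure comes back as
a return value the gateway can act on; with a shared writable mmap a
failed writeback of a dirty page is at best a SIGBUS, at worst silent.
Rough sketch, again nothing here is actual sheepdog code:

#include <sys/types.h>
#include <unistd.h>

int cache_write(int fd, const void *buf, size_t len, off_t off)
{
	ssize_t ret;

	ret = pwrite(fd, buf, len, off);
	if (ret < 0 || (size_t)ret != len)
		return -1;	/* short write or I/O error; the caller
				   can retry or fail the request */
	if (fdatasync(fd) < 0)
		return -1;	/* media errors show up here, not as a
				   signal at some random later access */
	return 0;
}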

Except for that, the idea sounds fine - I suspect making the gateway
node use the same storage mechanism as the "normal" endpoint nodes is
going to make the code both simpler and easier to debug.
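
Concretely, something like a single operations vector that both the
plain object store and the gateway cache implement, so the gateway
exercises the same code paths (struct and field names are made up):

#include <stdint.h>
#include <sys/types.h>

struct store_ops {
	int (*read)(uint64_t oid, void *buf, size_t len, off_t off);
	int (*write)(uint64_t oid, const void *buf, size_t len, off_t off);
	int (*flush)(uint64_t oid);
};

/* the callers don't care which backend they talk to */
extern const struct store_ops plain_store_ops;	/* endpoint nodes */
extern const struct store_ops cached_store_ops;	/* gateway with cache */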



