[Sheepdog] Dividing objects across disks
morita.kazutaka at lab.ntt.co.jp
Tue Aug 9 12:11:55 CEST 2011
At Mon, 8 Aug 2011 18:38:27 +0100,
Brian Candler wrote:
> On Tue, Aug 09, 2011 at 01:27:14AM +0900, MORITA Kazutaka wrote:
> > [on node A]
> > $ sheep /store_device/0 --zone 1
> > $ sheep /store_device/1 --zone 1
> > [on node B]
> > $ sheep /store_device/0 --zone 2
> > $ sheep /store_device/1 --zone 2
> > [on node C]
> > $ sheep /store_device/0 --zone 3
> > $ sheep /store_device/1 --zone 3
> > The data is not replicated in the same zone, so you can ensure that
> > the data is replicated to separate physical nodes.
> > Does this work for you?
> Yes, that would work fine, at the cost of some complexity in management -
> having to make each server be its own zone cancels some of the plug-and-play
> benefit of sheepdog.
If the server daemon uses node-specific data (e.g. the IP address)
as a default zone id, we don't need to specify the zone id, do we?
I'll send a patch to support it.
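To illustrate the idea, here is a minimal sketch (not the actual patch) of
deriving a default zone id from a node's IPv4 address by packing its four
octets into a 32-bit integer; the function name `ip_to_zone` is hypothetical:

```shell
# Hypothetical sketch: derive a default zone id from the node's
# IPv4 address so that each physical node gets a distinct zone
# without the admin passing --zone explicitly.
ip_to_zone() {
    # split "a.b.c.d" into octets and pack them into one integer
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

ip_to_zone 192.168.0.1   # -> 3232235521
```

Two daemons on the same host would then share a zone id automatically,
which preserves the "replicas never land in the same zone" guarantee.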
> I was thinking that perhaps if it stored objects under /obj<x>/ -
> where x was a value 0-f based on a hash of the oid - then I could use
> symlinks to point half the storage to one disk and half to the other.
> It could even be made to look in both /obj<x>/ and /obj/ to allow
> transparent migration from one structure to the other. (Exim has a similar
> approach for its split_spool_directory option, which splits the mail queue
> into 62 subdirs)
That approach also looks interesting.
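A minimal sketch of the split-directory layout described above, assuming
the simplest possible "hash" (the low hex digit of the oid); the helper
name `objdir` is hypothetical, not part of sheepdog:

```shell
# Hypothetical sketch of Brian's split-directory idea: pick one of
# 16 subdirectories obj0/ .. objf/ from the low hex digit of the
# object id, so half the obj<x>/ directories can be symlinks onto
# a second disk, e.g.:
#   ln -s /disk1/obj8 /store/obj8
objdir() {
    printf 'obj%x\n' $(( 0x$1 & 0xf ))
}

objdir 00570bcd00000003   # -> obj3
```

Looking in both /obj<x>/ and /obj/ on reads would then allow migrating
existing stores to the split layout without moving objects up front.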
> sheepdog mailing list
> sheepdog at lists.wpkg.org