[sheepdog-users] [Sheepdog Announcement] Erasure coding is fully functional with Sheepdog now

Andrew J. Hobbs ajhobbs at desu.edu
Tue Oct 22 16:23:53 CEST 2013


Last question then:  Are there predefined stripe+parity settings that 
you would recommend for specific targeted redundancy or performance 
levels?  For example, what would you recommend if I have 7 nodes in a 
cluster and want to minimize redundant data storage while surviving up 
to three machines dropping out?

Would that be 4:3 by default? And what about larger node counts?
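Working the numbers helps here. A minimal sketch (assuming, per the reply quoted below, that an x:y scheme needs at least x + y nodes and tolerates up to y simultaneous node failures, and ignoring any restrictions Sheepdog may place on the allowed values of x and y; the helper name is invented for illustration):

```python
# Hypothetical helper (not part of Sheepdog): enumerate x:y erasure schemes
# that fit a cluster. Assumption: an x:y scheme needs at least x + y nodes
# and survives up to y node failures; storage overhead is y/x.

def viable_schemes(nodes, failures):
    """Yield (x, y, overhead) for every scheme fitting the cluster."""
    for y in range(failures, nodes):
        for x in range(1, nodes - y + 1):
            yield x, y, y / x

# With 7 nodes and 3 tolerated failures, the lowest-overhead scheme is 4:3:
best = min(viable_schemes(7, 3), key=lambda s: s[2])
print(best)  # (4, 3, 0.75)
```

So with 7 nodes, 4:3 gives 75% storage overhead while surviving 3 node losses, versus 300% overhead for 4-way replication with the same fault tolerance.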

On 10/22/2013 10:09 AM, Liu Yuan wrote:
> On Tue, Oct 22, 2013 at 02:00:19PM +0000, Andrew J. Hobbs wrote:
>> I love the work you've been doing for this but I do have some questions
>> regarding erasure.
>> In your notes, you mention specifying the policy such as 4:2, how does
>> this map to physical nodes?  Would that mean you could use that 4:2 for
>> 6 or more nodes?  How does that work exactly?  Would that mean the 16:15
>> configuration requires 31 nodes or more?
> Yes. 16:15 means we need at least 31 nodes to make it work.
> Suppose you have specified an x:y scheme; then the (x + y) strips that contain
> the data and parity will be spread across (x + y) nodes as (x + y) objects in
> the cluster, by the same hashing algorithm used for replicated objects.
>> If your node count exceeds the stripe/parity setting does the data get
>> rotated across nodes so all nodes contribute to throughput?
> Yes. All the erasure-coded objects will be spread evenly across the consistent
> hash ring, which maps to virtual nodes of weighted physical nodes.
> The placement policy for erasure-coded objects is the same as for replicated
> objects.
> Thanks
> Yuan
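Yuan's description of strip placement can be sketched roughly like this (a deliberate simplification: plain SHA-1 consistent hashing with one virtual node per physical node, whereas the real Sheepdog ring uses weighted virtual nodes as noted above; node and object names are invented):

```python
import hashlib

def ring_position(key):
    """Map a key to a position on the consistent hash ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

def place_strips(obj_id, nodes, x, y):
    """Assign the x + y strips of one object to x + y distinct nodes."""
    ring = sorted((ring_position(n), n) for n in nodes)
    pos = ring_position(obj_id)
    # Walk clockwise from the object's ring position, taking x + y distinct
    # nodes, the same way successive replicas would be placed.
    start = next((i for i, (p, _) in enumerate(ring) if p >= pos), 0)
    return [ring[(start + i) % len(ring)][1] for i in range(x + y)]

nodes = ["node%d" % i for i in range(7)]
print(place_strips("vdi-obj-1", nodes, 4, 3))  # 7 distinct nodes, one strip each
```

Because each object hashes to a different starting point on the ring, the strip sets of different objects begin at different nodes, which is why all nodes end up contributing to aggregate throughput.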

