[sheepdog-users] Vdi corruption with erasure code if n nodes < x

Andrew J. Hobbs ajhobbs at desu.edu
Thu Oct 9 17:57:50 CEST 2014


I'd stick with normal (replicated) copies for three servers and only switch
to erasure coding when you have more servers.
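
To make the trade-off concrete, here is a rough sketch of both layouts using
the dog command line (treat the exact flags as from memory, and the VDI name
'test' as a placeholder). With copies=3, every object is replicated in full
on three nodes: 3x the space, but two node failures are survivable. With 2:1
erasure coding, each object is split into two data strips plus one parity
strip: only 1.5x the space, but just one failure is survivable, and since
2:1 already requires three nodes, a three-node cluster that loses a node has
no headroom left for recovery.

   # Cluster-wide default redundancy is chosen at format time (pick one):
   $ dog cluster format --copies 3     # plain replication, 3 full copies
   $ dog cluster format --copies 2:1   # erasure: 2 data + 1 parity strips

   # Redundancy can also be set per VDI at creation time:
   $ dog vdi create -c 2:1 test 100G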

On 10/09/2014 10:13 AM, Micha Kersloot wrote:
> Hi Andrew,
>
> thank you for your answer. So it seems that using 3 nodes with copies=2:1 is not really stable at the moment? Do you think copies=3 on 3 nodes is all right, or do I really need to add an extra node to get things stable?
>
> Kind regards,
>
> Micha Kersloot
>
> Stay up to date and receive the latest tips about Zimbra/KovoKs Contact:
> http://twitter.com/kovoks
>
> KovoKs B.V. is registered under Chamber of Commerce (KvK) number: 11033334
>
> ----- Original Message -----
>> From: "Andrew J. Hobbs" <ajhobbs at desu.edu>
>> To: sheepdog-users at lists.wpkg.org
>> Sent: Thursday, October 9, 2014 3:48:06 PM
>> Subject: Re: [sheepdog-users] Vdi corruption with erasure code if n nodes < x
>>
>> Likely.  In production, we have 8 servers and use copies=3, and we have
>> generally not experienced corruption.  I suspect there are corner cases
>> where recovery fails once you drop below the minimum number of copies.
>> We certainly know there are a couple of recovery issues with erasure
>> coding that still need to be fixed.
>>
>> On 10/09/2014 06:05 AM, Micha Kersloot wrote:
>>> Hi,
>>>
>>> ----- Original Message -----
>>>> From: "Valerio Pachera" <sirio81 at gmail.com>
>>>> To: "Lista sheepdog user" <sheepdog-users at lists.wpkg.org>
>>>> Sent: Tuesday, October 7, 2014 5:06:23 PM
>>>> Subject: Re: [sheepdog-users] Vdi corruption with erasure code if n nodes
>>>> < x
>>>>
>>>> 2014-10-07 15:32 GMT+02:00 Hitoshi Mitake <mitake.hitoshi at gmail.com>:
>>>>> It doesn't work because of some bugs.
>>>>> I'll fix it so that halting the cluster or freezing VDIs won't be required.
>>>>> # I'm on a business trip to the U.S., so my replies and development are
>>>>> delayed, sorry.
>>>> Thanks for the update.
>>> would this explain the problems I'm seeing, e.g.:
>>>
>>>    10.4 % [=========>          ] 10 GB / 100 GB
>>>    object cb7f6400000a6b is inconsistent
>>>
>>> while I'm testing some failure scenarios? Somehow it seems very easy for
>>> me to end up with corrupted VDIs.
>>>
>>> Kind regards,
>>>
>>> Micha Kersloot
>>
>> --
>> sheepdog-users mailing lists
>> sheepdog-users at lists.wpkg.org
>> http://lists.wpkg.org/mailman/listinfo/sheepdog-users
>>
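
As an aside on the "object ... is inconsistent" message quoted above: that
output looks like sheepdog's consistency checker, which can also be run by
hand against a single VDI (a sketch; 'test' is again a placeholder name,
and as I understand it, repair only works reliably for replicated VDIs):

   # Walk the VDI's objects and report/repair inconsistencies:
   $ dog vdi check test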
