[Sheepdog] Few things

krimson krims0n32 at gmail.com
Mon May 23 20:45:51 CEST 2011


I've been trying to reproduce this but have managed to do so only once.
However, I can reproduce a similar error with a different scenario, and
I now have the feeling it is some sort of timing issue. Here's what I do:

root at styx:~# killall sheep
root at styx:~# sheep /sheep
root at styx:~# collie vdi list
   name        id    size    used  shared    creation time   vdi id
------------------------------------------------------------------
failed to read a inode header 1131155, 0, 42
failed to read a inode header 2701199, 0, 42
failed to read a inode header 13864185, 0, 42

But then the next time I repeat the list command (after, say, one
second) it works fine:

root at styx:~# collie vdi list
   name        id    size    used  shared    creation time   vdi id
------------------------------------------------------------------
   deb01        1  8.0 GB  136 MB  0.0 MB 2011-05-23 20:38   114293
   xxx          1  5.0 GB  0.0 MB  0.0 MB 2011-05-23 20:37   29378f
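
As a workaround on my side I simply wait and retry. A minimal sketch of
that, in case it helps reproduce the timing window (the one-second delay
and the five-attempt cap are arbitrary guesses on my part, not anything
sheepdog requires):

#!/bin/sh
# Re-run 'collie vdi list' until it stops reporting read failures.
# Delay and attempt count are arbitrary; adjust as needed.
i=0
while [ $i -lt 5 ]; do
    out=$(collie vdi list 2>&1)
    if ! printf '%s\n' "$out" | grep -q 'failed to read'; then
        printf '%s\n' "$out"
        break
    fi
    echo "vdi list not ready yet, retrying..." >&2
    sleep 1
    i=$((i + 1))
done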

Does this make any sense? Hopefully you can reproduce it. I am using a
git checkout from yesterday and am currently testing with a 2-node
setup (copies=2), on an ext4 filesystem mounted with the user_xattr
option on both nodes. sheep.log does not show anything except an
"accepted" and a "closed" message when I run the list command.
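
For completeness, this is roughly how each node is set up (a sketch
only: /dev/sdb1 and the /sheep directory are placeholders for whatever
store device and path you use, and I assume the redundancy level is
passed via --copies when formatting the cluster):

# on each node: mount the store with extended attributes enabled
# (/dev/sdb1 and /sheep are placeholders)
mount -o user_xattr /dev/sdb1 /sheep

# on each node: start the sheep daemon on that directory
sheep /sheep

# once, from any node, after both daemons have joined the cluster:
collie cluster format --copies=2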

Thanks!

On 05/23/2011 11:13 AM, MORITA Kazutaka wrote:
> At Sun, 22 May 2011 13:38:19 +0200,
> krimson wrote:
>> Just ran into another issue: if I abort this operation:
>>
>> qemu-img convert /dev/vmvg/ub01 sheepdog:ub01
>>
>> this seems to result in corruption:
>>
>> # collie vdi list
>>     name        id    size    used  shared    creation time   vdi id
>> ------------------------------------------------------------------
>> failed to read a inode header 2701199, 0, 2
>> failed to read a inode header 10955677, 0, 2
>> failed to read a inode header 13864185, 0, 2
>>
>> I have to run 'collie cluster format' before I can use the store again.
> In my environment, I could not reproduce this problem.  How many
> physical machines do you use for Sheepdog?  What is the level of
> redundancy?  Does this problem happen every time?  Does sheep.log
> contain any error messages?
>
> Thanks,
>
> Kazutaka
