[sheepdog-users] failed to read inode header
terletskiy at emu.ru
Fri May 30 05:24:45 CEST 2014
When running "vdi check" I see messages like:
"no majority of 3998c70000014d"
Is this bad? Can an image with this error still be used?
On 30.05.2014 7:00, Maxim Terletskiy wrote:
> Thanks for your answer.
> Looks like I was lucky enough. First I made a tar backup of the files
> on that disk and plugged it into sheep. But sheep deleted all the
> data. :( I immediately unplugged the disk and recovered the data from
> the backup. After that I ran "cluster shutdown"; two nodes died as
> expected, but on two nodes the sheep process did not exit (and there
> were no log messages explaining why), so I killed it manually. After
> that I started the cluster. The node where I had attached that disk
> reported "failed to read inode header" on two or three MDs. I deleted
> those inodes. The node started and the cluster began peering. After
> that I successfully recovered some test images with "vdi check".
> Before the cluster shutdown, "check" on these images reported "no
> node"; now check finishes with no errors. I ran "qemu-img convert" and
> computed md5 sums on those images, and they were correct. So I started
> "cluster check", waited three hours, and am now starting my VMs. Not
> sure they are all OK, but at least a couple of them are already
> running successfully.
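The md5 verification step described above can be sketched as follows. The VDI name and paths are hypothetical, and the qemu-img line assumes a qemu built with sheepdog support, so the cluster-dependent commands are shown as comments:

```shell
#!/bin/sh
# Hedged sketch of the verification in the mail above: export the VDI
# with qemu-img, then compare its checksum with a known-good copy.
set -eu

# Export out of sheepdog (hypothetical VDI name "testvdi"):
#   qemu-img convert -f raw sheepdog:testvdi /tmp/testvdi.raw
# Compare against the copy from the tar backup:
#   md5sum /tmp/testvdi.raw /backup/testvdi.raw

# The comparison itself, demonstrated on stand-in files:
echo "payload" > /tmp/a.img
cp /tmp/a.img /tmp/b.img
if [ "$(md5sum < /tmp/a.img)" = "$(md5sum < /tmp/b.img)" ]; then
    echo "checksums match"
fi
```

If the checksums differ, the exported image should be treated as suspect even when "vdi check" reports no errors.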
> On 30.05.2014 6:34, Liu Yuan wrote:
>> On Thu, May 29, 2014 at 10:25:14PM +0400, Maxim Terletskiy wrote:
>>> After some inspecting I found that all (I hope all) of the missing
>>> data pieces are left on one disk which I forgot to plug in after the
>>> sheep restart. Can I just plug this disk into the running sheep to
>>> access this data? Or do I need to stop the sheep daemon and run it
>>> again with this disk in its options? Maybe there is some other safe
>>> method to return this data to the cluster?
>> You can plug in the disk on the fly without any problem. After
>> plugging in the disk, you need to run
>> $ dog cluster reweight
>> which will recover the missing objects onto the other nodes.
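The on-the-fly sequence suggested above might look like the sketch below. It assumes sheepdog's multi-disk (md) mode; the disk path is hypothetical, and the dog invocations are kept as comments since they need a live cluster (verify the exact subcommands with `dog node md --help` on your version):

```shell
#!/bin/sh
# Hedged sketch: plug a recovered disk into a running sheep daemon,
# then rebalance so missing objects are re-placed across the cluster.
set -eu

DISK=/var/lib/sheepdog/disk2   # hypothetical mount point of the old disk

plug_and_reweight() {
    disk=$1
    # On a live cluster these would be the real invocations:
    #   dog node md plug "$disk"    # attach the disk to the running sheep
    #   dog cluster reweight        # recover missing objects cluster-wide
    echo "would run: dog node md plug $disk"
    echo "would run: dog cluster reweight"
}

plug_and_reweight "$DISK"
```

If sheep was instead started with the disk listed on its command line (no md mode), restarting the daemon with its original arguments is the alternative, as the thread discusses.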
>>> Sorry if I'm asking too many questions. :) This data is extremely
>>> important to me and unfortunately I don't have any backups. :(
>> Probably you can check 'cluster snapshot' to make a backup of your
>> cluster in case of emergency. Or use qemu-img convert to back up
>> individual VDIs.
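The two backup routes suggested above could be sketched like this. Tags, paths, and the VDI name are hypothetical, and the commands are shown as comments because they need a live cluster (check `dog cluster snapshot --help` for the exact syntax on your version):

```shell
#!/bin/sh
# Hedged sketch of the two backup routes from the reply above.
set -eu

BACKUP_DIR=/mnt/backup   # hypothetical backup destination

# Whole-cluster backup via cluster snapshot:
#   dog cluster snapshot save mybackup "$BACKUP_DIR"
#   dog cluster snapshot list "$BACKUP_DIR"
#
# Per-VDI backup via qemu (needs qemu built with sheepdog support):
#   qemu-img convert -f raw sheepdog:myvdi "$BACKUP_DIR/myvdi.raw"

echo "backup sketch only; run the commented commands on a live cluster"
echo "backups would be stored under $BACKUP_DIR"
```

The per-VDI route has the advantage that the resulting raw file can be checksummed and restored with standard tools, independent of the cluster state.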