[sheepdog] [sheepdog-users] sheepdog 0.9 vs live migration

Bastian Scholz nimrodxx at gmx.de
Tue Dec 16 12:14:44 CET 2014


Hi Hitoshi,

sorry, second try to send to the list...

On 2014-12-16 10:51, Hitoshi Mitake wrote:
>> if I remove the VDI lock the live migration works correctly:
>> $ dog vdi lock unlock test-vm-disk
>> 
>> but after the live migration I can't relock the VDI.

> Thanks for your report. As you say, live migration and vdi locking
> seem to be conflicted. I'll work on it later. But I'm not familiar
> with qemu's live migration feature, so it would take time. Could you
> add an issue to our launchpad tracker for remainder?

During a live migration you temporarily have two qemu instances
running which access the same vdi from different hosts.
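Roughly, what happens under the hood is something like this (only a
sketch; the vdi name, port and machine options are just examples):

  # destination host: a second qemu is started against the same vdi,
  # waiting for the incoming migration stream
  qemu-system-x86_64 -m 1024 \
      -drive file=sheepdog:test-vm-disk,if=virtio \
      -incoming tcp:0:4444
  # meanwhile the source qemu keeps running (and holds the vdi lock)
  # until the guest state has been transferred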

A similar problem exists with drbd (a kind of RAID 1 over the
network between two nodes), which in its default configuration
lets only one node (the "Primary" node) access the drbd device.
But for live migration both nodes (or rather both qemu instances)
need access to the drbd device.

For this, drbd has a "dual-primary mode" which can be enabled
temporarily with the drbd admin utility (drbdadm) and a command
line switch. In my environment I let libvirt handle this by
writing a simple libvirt hook script which enables dual-primary
mode when a live migration starts and disables it again when the
migration has finished.
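Something like the following (a minimal sketch, untested; the
resource name "r0", the guest-to-resource mapping and the exact
hook operations are assumptions, and the net option may have to
be set on both peers):

  #!/bin/sh
  # /etc/libvirt/hooks/qemu
  # called by libvirt as: qemu <guest> <operation> <sub-op> ...
  GUEST="$1"; OP="$2"
  RES="r0"            # drbd resource backing the guest (example)

  case "$OP" in
    migrate)
      # destination host, before the migration starts: allow a
      # second primary (needs protocol C) and promote the device
      drbdadm net-options --protocol=C --allow-two-primaries "$RES"
      drbdadm primary "$RES"
      ;;
    release)
      # source host, after the guest is gone: demote the device
      # and switch dual-primary mode off again
      drbdadm secondary "$RES"
      drbdadm net-options --protocol=C --allow-two-primaries=no "$RES"
      ;;
  esac
  exit 0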

For sheepdog it would be nice (at least for me) if it were
possible to unlock the vdi, migrate the guest to a new node and
lock the vdi again. (I don't know whether this is feasible to
implement in sheepdog; maybe allow a second "lock" on an already
locked vdi and clear the old lock automatically once the old qemu
instance is destroyed?)
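Roughly the manual flow I have in mind (again just a sketch; the
guest and host names are examples, and the relock at the end is
exactly the step that does not work today):

  # before the migration: drop the lock so the destination qemu
  # is allowed to open the vdi
  dog vdi lock unlock test-vm-disk
  # live-migrate the guest with libvirt
  virsh migrate --live test-vm qemu+ssh://desthost/system
  # afterwards sheepdog would ideally hand the lock to the new qemu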

Cheers

Bastian



