[Sheepdog] sheepdog image created but shepherd does not show it

Piavlo piavka at cs.bgu.ac.il
Mon Dec 28 21:15:54 CET 2009


Hi,

Now it completes fine.

Now some issues/questions, mainly because I'm quite meticulous;
7) and 8) seem to be actual bugs.

1) The first thing that attracts attention is that the blocks of a single
image are not evenly distributed among the nodes:

shell>shepherd info -t sheep
Id      Size    Used    Use%
 0      97G     2G        2%
 1      96G     3G        3%
 2      98G     1G        1%

Total   292G    7G        2%, total virtual VDI Size    5G
shell>

One node has 904 blocks, another has 464 blocks, and the last one has 668
blocks. Is that expected?
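For what it's worth, if sheepdog assigns objects to nodes by hashing them onto the node set (an assumption about the design on my part, not something I verified in the code), a moderate imbalance would be expected rather than a bug. A toy sketch, with cksum standing in for whatever hash the real placement uses:

```shell
#!/bin/sh
# Toy illustration only -- NOT sheepdog's actual placement code.
# Assumption: each object is placed by hashing its name onto one of
# 3 nodes. A finite number of objects rarely splits perfectly evenly,
# so some per-node imbalance is the expected outcome.
c0=0; c1=0; c2=0
for i in $(seq 1 300); do
    # cksum (CRC) is a stand-in for the real placement hash
    h=$(printf 'obj%d' "$i" | cksum | cut -d' ' -f1)
    case $((h % 3)) in
        0) c0=$((c0 + 1)) ;;
        1) c1=$((c1 + 1)) ;;
        2) c2=$((c2 + 1)) ;;
    esac
done
echo "node0=$c0 node1=$c1 node2=$c2"
```

The three counts always sum to 300, but they need not be equal, which is the same kind of skew as the 904/464/668 split above.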

2) The blocks of a single image are distributed among all nodes, while
/sheepdog/0/vdi/zopa is created on only two nodes.
What's the purpose of the empty
/sheepdog/0/vdi/zopa/0000000000040000-00000000 file?

3) Another finding is the block numbering:
40000 40001 40003 40004 40005 40007 40009 4000a ...
The block numbers 40002 40006 40008 40010 ... are missing on all nodes,
so I'm wondering how the block numbers are actually incremented/allocated.
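My guess (and it is only a guess) is that an object exists only once its block has actually been written, so the gaps would simply be unwritten blocks. Here is how I enumerated the gaps; the names in `present` are the first few observed above:

```shell
#!/bin/sh
# Sketch: find which hex object names are absent from a contiguous range.
# 'present' reproduces the first few names seen in the listing above;
# on a real node you would build it from ls of the store directory.
present=" 40000 40001 40003 40004 40005 40007 "
missing=""
i=0
while [ "$i" -le 7 ]; do
    name=$(printf '%x' $((0x40000 + i)))
    case "$present" in
        *" $name "*) ;;                      # block object exists
        *) missing="$missing $name" ;;       # gap in the numbering
    esac
    i=$((i + 1))
done
echo "missing:$missing"
```

For the range 40000-40007 this reports 40002 and 40006 as missing, matching what I see on the nodes.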

4) How do I delete a sheepdog image?

5) How do I rename a sheepdog image?

6) What is supposed to happen if I interrupt sheepdog image creation?

7) After I created several images, stopped sheepdog on all nodes, and
started them again later, all images can still be listed, but when I try
to create another image I get:

shell-srv1> kvm-img convert -f raw -O sheepdog /dev/sys/kvm-img foo
find_vdi_name 1041: Invalid error code, foo
find_vdi_name 1041: Invalid error code, foo
qemu-img: Could not open 'foo'
shell-srv1>

shell-srv1>find /sheepdog/0/ -name '*1000*'
/sheepdog/0/vdi/foo/0000000000100000-00000000
/sheepdog/0/100000
shell-srv1>

shell-srv2> find /sheepdog/0/ -name '*1000*'
/sheepdog/0/vdi/foo/0000000000100000-00000000
shell-srv2>

shell-srv3> find /sheepdog/0/ -name '*1000*'
shell-srv3>

8) I started a VM with a sheepdog image, but it gets stuck during the
boot process:

...
[    0.752399] device-mapper: uevent: version 1.0.3
[    0.753687] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01)
initialised: dm-devel at redhat.com
[    0.754893] cpuidle: using governor ladder
[    0.755450] cpuidle: using governor menu
[    0.759442] usbcore: registered new interface driver hiddev
[    0.760177] usbcore: registered new interface driver usbhid
[    0.760803] usbhid: v2.6:USB HID core driver
[    0.761431] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[    0.762095] virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI
11 (level, high) -> IRQ 11
[    0.764556] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[    0.765206] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI
10 (level, high) -> IRQ 10
[    0.766695]  vda:
[    0.953310] input: ImExPS/2 Generic Explorer Mouse as /class/input/input2

It gets stuck here ... any ideas why?
This is what I see from an strace of the kvm process:

[pid 26260] write(7, "\0"..., 1)        = 1
[pid 26260] write(16, "\1\0\0\0\0\0\0\0"..., 8) = 8
[pid 26260] read(17, 0x7fff6dddf4d0, 128) = -1 EAGAIN (Resource
temporarily unavailable)
[pid 26260] timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
[pid 26260] timer_settime(0, 0, {it_interval={0, 0}, it_value={0,
29000000}}, NULL) = 0
[pid 26260] select(20, [0 6 8 9 14 15 17 18 19], [], [], {1, 0}) = 2 (in
[6 15], left {0, 999997})
[pid 26260] read(15, "\1\0\0\0\0\0\0\0"..., 4096) = 8
[pid 26260] read(15, 0x7fff6ddde560, 4096) = -1 EAGAIN (Resource
temporarily unavailable)
[pid 26260] read(6, "\0"..., 512)       = 1
[pid 26260] read(6, 0x7fff6dddf360, 512) = -1 EAGAIN (Resource
temporarily unavailable)
[pid 26260] select(20, [0 6 8 9 14 15 17 18 19], [], [], {1, 0}) = 1 (in
[17], left {0, 970990})
[pid 26260] read(17,
"\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
128) = 128
[pid 26260] rt_sigaction(SIGALRM, NULL, {0x40a5a0, ~[KILL STOP RTMIN
RT_1], SA_RESTORER, 0x7f7c71ad9080}, 8) = 0

shell>lsof -p 26260 | awk '$4 ~ /^(6|7|15|16|17)/'
kvm     26260 root    6r  FIFO                0,8      0t0 558925 pipe
kvm     26260 root    7w  FIFO                0,8      0t0 558925 pipe
kvm     26260 root   15u  0000                0,9        0   3506 anon_inode
kvm     26260 root   16u  0000                0,9        0   3506 anon_inode
kvm     26260 root   17u  0000                0,9        0   3506 anon_inode
shell>

 Thanks
 Alex

MORITA Kazutaka wrote:
> On 12/27/2009 07:29 PM, Piavlo wrote:
>>  Hi,
>>
>> The patched sheepdog version now immediately fails with:
>>
>> shell-srv1>  kvm-img convert -f raw -O sheepdog /dev/sys/kvm-img zopa
>> find_vdi_name 1041: Invalid error code, zopa
>> find_vdi_name 1041: Invalid error code, zopa
>> qemu-img: Could not open 'zopa'
>> shell-srv1>
> 
> Sorry for the inconvenience. Could you try the following?
> This depends on the patch I sent yesterday.
> 
> ==
> diff --git a/collie/vdi.c b/collie/vdi.c
> index 290d919..f2acc9d 100644
> --- a/collie/vdi.c
> +++ b/collie/vdi.c
> @@ -170,10 +170,6 @@ int lookup_vdi(struct cluster_info *ci,
>  		nr_reqs = nr_nodes;
>  
>  	memset(&req, 0, sizeof(req));
> -	copies = rsp->copies;
> -	nr_reqs = copies;
> -	if (nr_reqs > nr_nodes)
> -		nr_reqs = nr_nodes;
>  
>  	req.opcode = SD_OP_SO_LOOKUP_VDI;
>  	req.tag = tag;
> @@ -188,7 +184,10 @@ int lookup_vdi(struct cluster_info *ci,
>  
>  	dprintf("looking for %s %lx\n", filename, *oid);
>  
> -	return ret;
> +	if (ret < 0)
> +		return rsp->result;
> +
> +	return SD_RES_SUCCESS;
>  }
>  
>  /* todo: cleanup with the above */



