[sheepdog-users] help: I have a two-node cluster; one node provides an iSCSI target backed by a VDI. When I kill tgtd, start it again, and set up the iSCSI target with the same VDI again, it says the VDI is locked. So I restart sheepdog, but after sheepdog starts, I find my cluster cannot be used.
Hitoshi Mitake
mitake.hitoshi at lab.ntt.co.jp
Tue Mar 17 03:11:29 CET 2015
At Mon, 16 Mar 2015 13:35:59 +0800 (CST),
李文涛 wrote:
>
> Hello,
> The details are:
> 1. Create a two-node (nodeA, nodeB) cluster with corosync, and set up an iSCSI target on nodeA.
> 2. Kill the tgtd process on that node. Start tgtd again and use the same VDI to set up the iSCSI target, but it fails (all on nodeA).
> ./tgtadm --op new --mode lu --tid 9 --lun 9 --bstype sheepdog --backing-store unix:/var/lib/sheepdog/sock:dog1
> tgtadm: invalid request
> log message:
> Mar 16 11:23:50 INFO [main] cluster_lock_vdi_main(1349) node: IPv4 ip:192.168.100.205 port:7000 is locking VDI (type: shared): e95708
> Mar 16 11:23:50 ERROR [main] add_new_participant(371) IPv4 ip:192.168.100.205 port:7000 is already locking e95708
> Mar 16 11:23:50 ERROR [main] cluster_lock_vdi_main(1352) locking e95708failed
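For reference, the setup in your steps 1 and 2 probably corresponds to something like the following; the two-copy format, VDI size, tid and target IQN here are only illustrative guesses, not taken from your report:
# format the cluster with 2 copies and create the VDI (name and size assumed)
$ dog cluster format -c 2
$ dog vdi create dog1 10G
# create the iSCSI target in tgt and attach the sheepdog VDI as a LUN
$ tgtadm --lld iscsi --op new --mode target --tid 9 -T iqn.2015-03.example:dog1
$ tgtadm --op new --mode lu --tid 9 --lun 9 --bstype sheepdog --backing-store unix:/var/lib/sheepdog/sock:dog1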
In the case of this "is already locking" error, you need to execute
$ dog vdi lock unlock <your vdi name>
on node A. It is required for multipath.
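For example, on node A (assuming the VDI name from your backing-store is dog1, and tid/lun 9 as in your command), the following should let the LUN be created again; dog vdi lock list, if your dog build provides it, also shows which node holds the lock:
# release the stale lock left by the killed tgtd, then retry the LUN creation
$ dog vdi lock unlock dog1
$ tgtadm --op new --mode lu --tid 9 --lun 9 --bstype sheepdog --backing-store unix:/var/lib/sheepdog/sock:dog1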
>
> 3. So I restart the sheepdog service on nodeA, but it fails to join the cluster, and the nodeB log prints the following messages.
>
> Mar 11 13:00:56 EMERG [main] cdrv_cpg_confchg(550) PANIC: a number of leaving node (1) is larger than majority (1), network partition
> Mar 11 13:00:56 EMERG [main] crash_handler(268) sheep exits unexpectedly (Aborted).
> Mar 11 13:00:56 EMERG [main] sd_backtrace(833) sheep.c:270: crash_handler
> Mar 11 13:00:56 EMERG [main] sd_backtrace(847) /lib64/libpthread.so.0() [0x3f58e0f49f]
> Mar 11 13:00:56 EMERG [main] sd_backtrace(847) /lib64/libc.so.6(gsignal+0x34) [0x3f58232884]
> Mar 11 13:00:56 EMERG [main] sd_backtrace(847) /lib64/libc.so.6(abort+0x174) [0x3f58234064]
> Mar 11 13:00:56 EMERG [main] sd_backtrace(833) corosync.c:548: cdrv_cpg_confchg
> Mar 11 13:00:56 EMERG [main] sd_backtrace(847) /usr/lib64/libcpg.so.4(cpg_dispatch+0x451) [0x33c0c01d51]
> Mar 11 13:00:56 EMERG [main] sd_backtrace(833) corosync.c:677: corosync_handler
> Mar 11 13:00:56 EMERG [main] sd_backtrace(833) event.c:210: do_event_loop
> Mar 11 13:00:56 EMERG [main] sd_backtrace(833) sheep.c:963: main
> Mar 11 13:00:56 EMERG [main] sd_backtrace(847) /lib64/libc.so.6(__libc_start_main+0xfc) [0x3f5821ecdc]
> Mar 11 13:00:56 EMERG [main] sd_backtrace(847) sheep() [0x403ee8]
>
> 4. How can I solve this problem?
> By the way, when I restart sheepdog on all nodes in the cluster twice, the cluster is OK again.
> sheepdog version: v0.9.1
> OS: CentOS 6.2
It seems that your VDI has only 2 replicas on a two-node cluster, so when one node leaves, the case is treated as a
network partition. If you have more than 3 nodes, the problem can be
avoided.
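For example, after starting a sheep daemon on a third node (the store path and default port below are assumptions, not from your setup), you can check that all members are visible before killing or restarting any node:
# on the new node, join the existing corosync/sheepdog cluster (default port 7000 assumed)
$ sheep /var/lib/sheepdog
# on any node, verify membership and cluster state
$ dog node list
$ dog cluster info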
Thanks,
Hitoshi