[Sheepdog] Sheepdog+iscsi high availability
Huxinwei
huxinwei at huawei.com
Mon Apr 16 11:37:51 CEST 2012
When the failover failed, had you used a ucarp hook to restart the iSCSI target on 'node b'?
Also, could you send the logs from both target nodes? They would be very helpful.
Thanks.
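A minimal sketch of what such a hook pair might look like (the paths, interface name, and VIP are assumptions; the logs below show an IET target, so the iscsitarget init script is used here):

```shell
#!/bin/sh
# /etc/ucarp/vip-up.sh - hypothetical path, passed to ucarp via --upscript.
# Runs when this node becomes master: bring up the shared address and
# (re)start the IET target so the initiator can re-establish its session.
ip addr add 192.168.0.100/24 dev eth0   # example VIP, adjust to your network
/etc/init.d/iscsitarget restart

#!/bin/sh
# /etc/ucarp/vip-down.sh - hypothetical path, passed via --downscript.
# Runs when this node loses mastership: drop the shared address and stop
# the target so it cannot answer stale sessions.
ip addr del 192.168.0.100/24 dev eth0
/etc/init.d/iscsitarget stop
```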
From: sheepdog-bounces at lists.wpkg.org [mailto:sheepdog-bounces at lists.wpkg.org] On Behalf Of joby xavier
Sent: Monday, April 16, 2012 4:59 PM
To: sheepdog at lists.wpkg.org
Subject: [Sheepdog] Sheepdog+iscsi high availability
Hi,
We would like to set up iSCSI high availability on top of Sheepdog distributed
storage.
Here is our setup: the OS is Ubuntu. Four nodes run Sheepdog distributed
storage, and two of them export that storage over iSCSI behind a virtual IP
managed by ucarp. Both target nodes use the same IQN, and the initiator
mounts the iSCSI storage as an LVM partition (sdc).
node a
node b
node c
node d
node x is the initiator
node a and node b share a common virtual IP, so that if 'node a' fails,
'node b' takes over as the iSCSI target; both use the same IQN.
Problem: when a failover happens, i.e. the iSCSI target switches from one
node to the other, the iSCSI disk fails on the initiator 'node x'.
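For context, the ucarp side of such a setup would look roughly like this on each target node (the addresses, vhid, and shared password are placeholders, not our actual values):

```shell
# Run on node a (and again on node b with node b's own --srcip).
# --addr is the shared virtual IP the initiator connects to;
# --upscript/--downscript are hooks that start/stop the iSCSI target
# when this node gains or loses mastership of the VIP.
ucarp --interface=eth0 --srcip=192.168.0.1 --vhid=1 --pass=secret \
      --addr=192.168.0.100 \
      --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh
```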
Here is /var/log/messages:
Apr 16 10:57:14 prox1 kernel: scsi7 : iSCSI Initiator over TCP/IP
Apr 16 10:57:14 prox1 kernel: scsi 7:0:0:0: RAID IET Controller 0001 PQ: 0 ANSI: 5
Apr 16 10:57:14 prox1 kernel: scsi 7:0:0:1: Direct-Access IET VIRTUAL-DISK 0001 PQ: 0 ANSI: 5
Apr 16 10:57:14 prox1 kernel: sd 7:0:0:1: [sdc] 2252800 512-byte logical blocks: (1.15 GB/1.07 GiB)
Apr 16 10:57:14 prox1 kernel: sd 7:0:0:1: [sdc] Write Protect is off
Apr 16 10:57:14 prox1 kernel: sd 7:0:0:1: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 10:57:14 prox1 kernel: sdc: unknown partition table
Apr 16 10:57:14 prox1 kernel: sd 7:0:0:1: [sdc] Attached SCSI disk
Apr 16 10:59:47 prox1 kernel: connection2:0: detected conn error (1020)
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Unhandled sense code
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Result: hostbyte=invalid driverbyte=DRIVER_SENSE
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Sense Key : Medium Error [current]
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Add. Sense: Unrecovered read error
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Unhandled sense code
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Result: hostbyte=invalid driverbyte=DRIVER_SENSE
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Sense Key : Medium Error [current]
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Add. Sense: Unrecovered read error
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Unhandled sense code
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Result: hostbyte=invalid driverbyte=DRIVER_SENSE
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Sense Key : Medium Error [current]
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Add. Sense: Unrecovered read error
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] CDB: Read(10): 28 00 00 00 00 08 00 00 08 00
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Unhandled sense code
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Result: hostbyte=invalid driverbyte=DRIVER_SENSE
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Sense Key : Medium Error [current]
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Add. Sense: Unrecovered read error
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
Apr 16 10:59:51 prox1 kernel: sd 7:0:0:1: [sdc] Unhandled sense code
root at prox1:~# pvdisplay
/dev/sdc: read failed after 0 of 4096 at 1153368064: Input/output error
/dev/sdc: read failed after 0 of 4096 at 1153425408: Input/output error
Sheepdog with a single iSCSI target node
(https://github.com/collie/sheepdog/wiki/General-protocol-support) works well.
Should we change anything in lvm.conf? Should we use multipath-tools? Is this the right procedure?
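One initiator-side setting worth checking is the open-iscsi replacement timeout, which controls how long queued I/O waits for a lost session to recover before the SCSI layer reports errors like the ones above (the value shown is the common default, not a recommendation):

```shell
# /etc/iscsi/iscsid.conf (open-iscsi initiator)
# Seconds that iscsid queues I/O for a lost session before failing it
# up to the SCSI layer; the ucarp failover plus target restart must
# complete within this window, or the disk errors out as shown above.
node.session.timeo.replacement_timeout = 120
```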
Thanks,
Joby Xavier