[sheepdog-users] Not all nodes are shown
Valerio Pachera
sirio81 at gmail.com
Mon Jul 7 12:27:57 CEST 2014
Hi, this is a problem I had a long time ago when I was using corosync.
Now I'm seeing it again, after a long time of using zookeeper:
dog node list
[1] 12:13:57 [SUCCESS] test004
Id Host:Port V-Nodes Zone
0 192.168.10.4:7000 127 67807424
1 192.168.10.6:7000 129 101361856
2 192.168.10.7:7000 129 118139072
[2] 12:13:57 [SUCCESS] test005
Id Host:Port V-Nodes Zone
0 192.168.10.4:7000 127 67807424
1 192.168.10.5:7000 128 84584640
2 192.168.10.6:7000 128 101361856
3 192.168.10.7:7000 128 118139072
[3] 12:13:57 [SUCCESS] test006
Id Host:Port V-Nodes Zone
0 192.168.10.4:7000 127 67807424
1 192.168.10.6:7000 129 101361856
2 192.168.10.7:7000 129 118139072
[4] 12:13:57 [SUCCESS] test007
Id Host:Port V-Nodes Zone
0 192.168.10.4:7000 127 67807424
1 192.168.10.5:7000 128 84584640
2 192.168.10.6:7000 128 101361856
3 192.168.10.7:7000 128 118139072
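For reference, I collected the output above with a parallel-ssh style run
over the four hosts; it is roughly equivalent to the loop below (just a
sketch, assuming passwordless ssh to each node):

# rough equivalent of the parallel run that produced the output above
# (assumption: passwordless ssh to test004..test007)
for h in test004 test005 test006 test007; do
    echo "=== $h ==="
    ssh "$h" dog node list
done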
Below are the last 7 lines of sheep.log from each node.
As you can see, the logs on the nodes that show all 4 nodes (test005 and
test007) differ from the others: they record a zk_leave followed by a
daemon restart, while the other two only show object recovery progress.
[1] 12:24:45 [SUCCESS] test004
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 93%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 94%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 95%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 96%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 97%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 98%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 99%
[2] 12:24:45 [SUCCESS] test005
Jul 04 18:19:08 INFO [main] zk_leave(985) leaving from cluster
Jul 07 12:10:28 INFO [main] md_add_disk(343) /mnt/sheep/0, vdisk nr 220, total disk 1
Jul 07 12:10:28 NOTICE [main] get_local_addr(519) found IPv4 address
Jul 07 12:10:28 INFO [main] send_join_request(828) IPv4 ip:192.168.10.5 port:7000
Jul 07 12:10:28 NOTICE [main] nfs_init(600) nfs server service is not compiled
Jul 07 12:10:28 INFO [main] check_host_env(493) Allowed open files 1024000, suggested 6144000
Jul 07 12:10:28 INFO [main] main(942) sheepdog daemon (version 0.8.0_223_ge4735ba) started
[3] 12:24:45 [SUCCESS] test006
Jul 07 12:10:40 INFO [main] recover_object_main(906) object recovery progress 94%
Jul 07 12:10:40 INFO [main] recover_object_main(906) object recovery progress 95%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 96%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 97%
Jul 07 12:10:41 INFO [main] recover_object_main(906) object recovery progress 98%
Jul 07 12:10:42 INFO [main] recover_object_main(906) object recovery progress 99%
Jul 07 12:10:42 INFO [main] recover_object_main(906) object recovery progress 100%
[4] 12:24:45 [SUCCESS] test007
Jul 04 18:19:08 INFO [main] zk_leave(985) leaving from cluster
Jul 07 12:10:34 INFO [main] md_add_disk(343) /mnt/sheep/0, vdisk nr 220, total disk 1
Jul 07 12:10:34 NOTICE [main] get_local_addr(519) found IPv4 address
Jul 07 12:10:34 INFO [main] send_join_request(828) IPv4 ip:192.168.10.7 port:7000
Jul 07 12:10:35 NOTICE [main] nfs_init(600) nfs server service is not compiled
Jul 07 12:10:35 INFO [main] check_host_env(493) Allowed open files 1024000, suggested 6144000
Jul 07 12:10:35 INFO [main] main(942) sheepdog daemon (version 0.8.0_223_ge4735ba) started
This is a testing cluster with 4 nodes, sheepdog daemon version
0.8.0_223_ge4735ba, using zookeeper.
What could be causing this?
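In the meantime, this is how I check whether the views have diverged
again (a quick sketch, again assuming ssh access to every node; differing
checksums mean the nodes disagree on membership):

# hash the membership each node reports; differing checksums = split view
# (assumption: passwordless ssh to each node)
for h in test004 test005 test006 test007; do
    printf '%s  ' "$h"
    ssh "$h" dog node list | md5sum
done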