[Sheepdog] Remove a cluster
Valerio Pachera
sirio81 at gmail.com
Sun Aug 14 14:56:22 CEST 2011
Hi all, because I wrote a wrong IP in corosync.conf and created the
cluster with that wrong IP, I wanted to remove the cluster and create
it from scratch.
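For reference, the IP-related part of corosync.conf looks roughly like
this (a sketch, not my actual file; the multicast values are just the
common defaults):

totem {
        version: 2
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.100.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}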
I also messed up because I ran 'collie cluster format --copies=2' on both nodes!
This was the situation:
SERVER2
# collie cluster info
The node had failed to join sheepdog
Ctime Epoch Nodes
2011-08-14 02:26:35 1 [192.168.100.1:7000]
SERVER1
# collie node info
The node had failed to join sheepdog
failed to get node list
# collie cluster shutdown
The node had failed to join sheepdog
failed to get node list
I tried to kill sheep and corosync but, when I restarted them, the
situation was the same.
I was wondering where the information was stored and I saw it was
simply in the sheepdog directory (the one that is going to store the
data).
So I did this:
BOTH SERVERS
# pkill sheep
# /etc/init.d/corosync stop
I unmounted /mnt/sheepdog and formatted it to zero all the information on it:
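# umount /mnt/sheepdog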
# mkfs.xfs -L sheepdog /dev/sdb6
Then I remounted it and restarted corosync and sheep.
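Something like this (assuming an fstab entry for /mnt/sheepdog; sheep
takes the store directory as its argument):
# mount /mnt/sheepdog
# /etc/init.d/corosync start
# sheep /mnt/sheepdog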
ONLY ON ONE OF THE NODES
# collie cluster format --copies=2
Now I get:
# collie cluster info
running
Ctime Epoch Nodes
2011-08-14 14:35:21 1 [192.168.100.1:7000, 192.168.100.2:7000]
:-)
Is there another way to 'clear' a cluster?