[Sheepdog] Remove a cluster

MORITA Kazutaka morita.kazutaka at lab.ntt.co.jp
Mon Aug 15 15:21:19 CEST 2011


At Sun, 14 Aug 2011 14:56:22 +0200,
Valerio Pachera wrote:
> 
> Hi all: because I wrote a wrong IP in corosync.conf and created the
> cluster with that wrong IP, I wanted to remove the cluster and create
> it from scratch.
> I also messed up because I ran 'collie cluster format --copies=2' on both nodes!

Actually, it is a bit confusing.  I've modified the wiki, thanks.
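For reference, the address a node joins with comes from the totem
interface section of corosync.conf, so that is the line to
double-check before formatting.  A typical corosync 1.x style snippet
looks roughly like this (the addresses here are just an example, not
your configuration):

    totem {
            version: 2
            interface {
                    ringnumber: 0
                    bindnetaddr: 192.168.100.0
                    mcastaddr: 226.94.1.1
                    mcastport: 5405
            }
    }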

> 
> This was the situation:
> 
> SERVER2
> # collie cluster info
> The node had failed to join sheepdog
> 
> Ctime                Epoch Nodes
> 2011-08-14 02:26:35      1 [192.168.100.1:7000]
> 
> 
> SERVER1
> # collie node info
> The node had failed to join sheepdog
> failed to get node list
> 
> # collie cluster shutdown
> The node had failed to join sheepdog
> failed to get node list
> 
> I tried to kill sheep and corosync but, when I restarted them, the
> situation was the same.
> I was wondering where the information was stored, and I saw it was
> simply in the sheepdog directory (the one that is going to store the
> data).
> So I did this:
> 
> ON BOTH SERVERS
> # pkill sheep
> # /etc/init.d/corosync stop
> I unmounted /mnt/sheepdog and reformatted it to zero out all the information on it.
> # mkfs.xfs -L sheepdog /dev/sdb6
> Then I remounted it and restarted corosync and sheep.
> 
> ONLY ON ONE OF THE NODES
> # collie cluster format --copies=2
> 
> Now I get
> 
> # collie cluster info
> running
> 
> Ctime                Epoch Nodes
> 2011-08-14 14:35:21      1 [192.168.100.1:7000, 192.168.100.2:7000]
> 
> :-)
> 
> Is there another way to 'clear' a cluster?

You don't need to run mkfs; just run 'rm -r /mnt/sheepdog/*'.  As you
noticed, all of the cluster state lives under that directory, so
removing its contents is enough.  Unfortunately, collie doesn't
support cleaning the sheep directory yet.
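If you want to script the whole reset, something like the following
should work.  This is just a rough sketch based on the steps in your
mail; it assumes /mnt/sheepdog stays mounted and that sheep is started
with the store directory as its argument:

    # on every node
    pkill sheep
    rm -r /mnt/sheepdog/*     # wipes the stored cluster state, keeps the mount
    /etc/init.d/corosync restart
    sheep /mnt/sheepdog       # restart the daemon on the cleaned store

    # on one node only
    collie cluster format --copies=2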

Thanks,

Kazutaka


