On Tue, Aug 13, 2013 at 08:27:02AM +0200, Valerio Pachera wrote:
> # collie node md info --all
> Id  Size    Used    Avail   Use%  Path
> Node 0:
>  0  465 GB  313 GB  153 GB  67%   /mnt/sheep/dsk02
>  1  1.8 TB  1.4 TB  402 GB  78%   /mnt/sheep/dsk03
> Node 1:
>  0  166 GB  92 GB   74 GB   55%   /mnt/sheep/dsk01/obj
>  1  465 GB  328 GB  138 GB  70%   /mnt/sheep/dsk02
>  2  1.8 TB  1.3 TB  555 GB  70%   /mnt/sheep/dsk03
> Node 2:
>  0  2.7 TB  2.1 TB  689 GB  75%   /mnt/sheep/dsk02
> Node 3:
>  0  465 GB  236 GB  229 GB  50%   /mnt/sheep/dsk03
>  1  1.8 TB  1.5 TB  314 GB  83%   /mnt/sheep/dsk04

1.5 TB, 1.3 TB, 1.4 TB looks fairly evenly distributed to me.

> Nodes 0, 1, and 2 each have a "small" disk that I'm using in the cluster.
> So far I have had to remove it from node 0 and node 2.
>
> Now I can see that the big disk on node 3 is getting full!
> If it continues like this, it's going to be a problem, because I can't
> unplug the disk (there isn't enough space left on the same node).
> I would have to kill the node.
>
> Any ideas?

Can you check whether .stale on each disk is empty? If it isn't, you can
empty .stale manually, as long as the cluster isn't in recovery.

Thanks
Yuan
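
[Editor's note: for reference, a minimal shell sketch of the kind of check Yuan suggests. It assumes .stale sits directly under each md disk path shown in the collie output above (adjust the path list per node), and it uses only standard find/du/rm, no sheepdog-specific tooling.]

#!/bin/sh
# Report how many objects .stale holds on each md disk, and how much
# space they take. Paths below are from the collie output for node 0;
# edit the list for each node.
for d in /mnt/sheep/dsk02 /mnt/sheep/dsk03; do
    stale="$d/.stale"
    [ -d "$stale" ] || continue
    count=$(find "$stale" -type f | wc -l)
    size=$(du -sh "$stale" | cut -f1)
    echo "$stale: $count objects, $size"
    # To reclaim the space manually, and ONLY once the cluster has
    # finished recovery, remove the stale objects:
    #   rm -f "$stale"/*
done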