[sheepdog-users] Suggestion on how to expand the cluster

Valerio Pachera sirio81 at gmail.com
Tue Apr 16 10:50:31 CEST 2013


There might have been problems before this that I didn't notice, because I
have lots of
Mar 24 08:19:53 [gway 32321] wait_forward_request(166) poll timeout 1
Mar 24 08:19:53 [gway 32406] wait_forward_request(166) poll timeout 1
Mar 24 08:19:53 [gway 32287] wait_forward_request(166) poll timeout 1
Mar 26 08:34:38 [gway 25521] wait_forward_request(166) poll timeout 1
Mar 26 08:34:38 [gway 25517] wait_forward_request(166) poll timeout 1

on the node "sheepdog004" (id 2) since 26 Mar.
collie vdi list was not giving any errors, so I didn't look into it or
notice that the data was not distributed across the nodes.
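
Next time I will check more than the vdi list; something like this should
have shown the problem earlier (I'm assuming collie vdi check behaves the
same way in this release):

root@sheepdog001:~# collie cluster info
root@sheepdog001:~# collie vdi check squeeze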

root@sheepdog001:~# collie node info
Id      Size    Used    Use%
 0      1.8 TB  7.0 GB    0%
 1      1.8 TB  6.3 GB    0%
 2      466 GB  221 GB   47%
Failed to read object 80a34c6700000000 No object found
Failed to read inode header
Failed to read object 80a34c6800000000 No object found
Failed to read inode header
Failed to read object 80a34c6900000000 No object found
Failed to read inode header
Failed to read object 80a34c6a00000000 No object found
Failed to read inode header
Failed to read object 80a34c6d00000000 No object found
Failed to read inode header
Failed to read object 80a34c7100000000 No object found
Failed to read inode header
Total   4.1 TB  235 GB    5%

Total virtual image size        380 GB
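
If I understand the accounting right, the Used column of the vdi list below
adds up to roughly

  100 + 2.3 + 0.1 + 1.8 + 0.3 + 193 + 50 ~ 347 GB

of allocated data, so with copies = 2 the cluster should be holding around
695 GB. The 235 GB total above does not even cover one full copy, which
would match the "No object found" errors (that's just my reading of the
numbers, I may be wrong about how Used is counted).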



2013/4/16 Valerio Pachera <sirio81 at gmail.com>

> I stopped the cluster.
> I added 1 disk of 2T on node "sheepdog001" and "sheepdog002".
> (I don't have a third disk for the third node right now).
> Restarted the cluster on the "original" disks and checked that everything
> was fine.
> Then I hot-added the new disks.
>
> root@sheepdog002:~# collie node md info
> Id      Size    Use     Path
> root@sheepdog002:~# collie node md plug /mnt/ST2000DM001-1CH164_W1E2N5GM/
> root@sheepdog002:~# collie node md info
> Id      Size    Use     Path
>  0      1.8 TB  40 MB   /mnt/ST2000DM001-1CH164_W1E2N5GM/
>
>
>
> Cluster seems fine but
> root@sheepdog001:~# collie vdi list
>   Name            Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
>   zimbra_backup    0  100 GB  100 GB  0.0 MB 2013-03-22 09:20   2e519        2
>   backup           0   10 GB  2.3 GB  0.0 MB 2013-03-28 12:23   19093f       2
>   test             0  100 MB  100 MB  0.0 MB 2013-03-22 08:21   7c2b25       2
>   wheezy           0   10 GB  1.8 GB  0.0 MB 2013-03-20 17:02   9533ed       2
> Failed to read object 80a34c6700000000 No object found
> Failed to read inode header
> Failed to read object 80a34c6800000000 No object found
> Failed to read inode header
> Failed to read object 80a34c6900000000 No object found
> Failed to read inode header
> Failed to read object 80a34c6a00000000 No object found
> Failed to read inode header
> Failed to read object 80a34c6d00000000 No object found
> Failed to read inode header
> Failed to read object 80a34c7100000000 No object found
> Failed to read inode header
>   squeeze          0   10 GB  292 MB  1.5 GB 2013-03-21 11:21   a34c73       2
>   backup_data      0  200 GB  193 GB  0.0 MB 2013-03-28 12:48   c8d128       2
>   crmdelta         0   50 GB   50 GB  0.0 MB 2013-04-03 09:15   e149bf       2
>
>
>
>