On Thu, Jul 18, 2013 at 11:43:00AM +0200, Valerio Pachera wrote:
> 2013/7/18 Liu Yuan <namei.unix at gmail.com>:
> >> Is it normal that /mnt/sheep/meta/cache/ is still full of data?
> > No. I think the remaining cache is related to the copy-on-write
> > mechanism used for snapshots.
>
> vdi list
> s backup_data  3  1.0 TB  4.0 MB  501 GB  2013-07-12 12:40  c8d12a  2  lug15
> s backup_data  4  1.0 TB  240 MB  501 GB  2013-07-15 10:04  c8d12b  2  lug15bis
> s backup_data  5  1.0 TB  2.1 GB  501 GB  2013-07-15 10:08  c8d12c  2  lug17_144450
> s backup_data  6  1.0 TB  964 MB  503 GB  2013-07-17 14:55  c8d12d  2  lug17_150000
> s backup_data  7  1.0 TB  5.9 GB  500 GB  2013-07-17 14:59  c8d12e  2
>   backup_data  0  1.0 TB   15 GB  505 GB  2013-07-17 15:15  c8d12f  2
>
> root at test004:/mnt/sheep/meta/cache# du -sh *
> 633M    19093f
> 8,1G    c8d128
> 8,0M    c8d129
> 4,0M    c8d12a
> 229M    c8d12b
> 1,9G    c8d12c
> 61M     c8d12d
> 5,9G    c8d12e
> 17M     c8d12f
>
> After removing the cache with rm (with no guests running), the vdi
> backup_data (1 TB) got corrupted.
> I guess it would have been better to use 'vdi cache delete -s tag'.
>
> vdi check of 'backup' (10 GB) and 'wheezy' (10 GB) fixed a few things,
> but they now seem fine (there were no snapshots for these vdis).
>
> Notice what happens when I delete backup_data (more than 200 GB used,
> 1 TB virtual).
>
> root at test004:/mnt/sheep/meta/cache# collie vdi list
>   Name          Id  Size    Used    Shared  Creation time     VDI id  Copies  Tag
>   backup         0  10 GB   3.1 GB  0.0 MB  2013-07-10 16:58  19093f  2
>   wheezy         0  10 GB   1.8 GB  0.0 MB  2013-07-16 15:08  9533ed  2
> s backup_data    3  1.0 TB  4.0 MB  501 GB  2013-07-12 12:40  c8d12a  2  lug15
> s backup_data    4  1.0 TB  236 MB  501 GB  2013-07-15 10:04  c8d12b  2  lug15bis
> s backup_data    5  1.0 TB  2.1 GB  501 GB  2013-07-15 10:08  c8d12c  2  lug17_144450
> s backup_data    6  1.0 TB  964 MB  503 GB  2013-07-17 14:55  c8d12d  2  lug17_150000
> s backup_data    7  1.0 TB  5.9 GB  500 GB  2013-07-17 14:59  c8d12e  2
>   backup_data    0  1.0 TB  15 GB   505 GB  2013-07-17 15:15  c8d12f  2
>
> root at test004:/mnt/sheep/meta/cache# collie vdi delete -s 3 backup_data
> root at test004:/mnt/sheep/meta/cache# collie vdi delete -s 4 backup_data
> root at test004:/mnt/sheep/meta/cache# collie vdi delete -s 5 backup_data
> root at test004:/mnt/sheep/meta/cache# collie vdi delete -s 6 backup_data
> root at test004:/mnt/sheep/meta/cache# collie vdi delete -s 7 backup_data
>
> root at test004:/mnt/sheep/meta/cache# collie vdi delete backup_data
> failed to read from socket: -1, Resource temporarily unavailable
> failed to read a response

This means the deletion process takes too long and collie times out. I'm
writing a patch to make collie wait indefinitely (as it should).

Thanks
Yuan
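
For illustration, the fix described above boils down to treating EAGAIN
("Resource temporarily unavailable") as "wait and retry" rather than as a
fatal error. Below is a minimal sketch in C of that retry loop, not the
actual sheepdog patch; the helper name and signature are hypothetical:

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    /* Read from fd, blocking as long as it takes: on EAGAIN (or EINTR)
     * wait in poll() with no timeout and retry the read, instead of
     * failing the whole request as the current client does. */
    static ssize_t read_wait_forever(int fd, void *buf, size_t len)
    {
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            ssize_t ret;

            for (;;) {
                    ret = read(fd, buf, len);
                    if (ret >= 0)
                            return ret;     /* data, or EOF when 0 */
                    if (errno != EAGAIN && errno != EINTR)
                            return -1;      /* a real socket error */
                    poll(&pfd, 1, -1);      /* -1 = block indefinitely */
            }
    }

With poll()'s timeout set to -1 the caller simply sleeps until the
response arrives, so a vdi deletion that takes many minutes keeps the
client waiting instead of surfacing as a socket error.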