2013/5/22 Liu Yuan <namei.unix at gmail.com>:
> notice any situation that 'vdi check' saturate the network?

Not exactly, but I saw eth0 running at 430 Mbit/s!

I'm not using my test cluster now; I'm on the production cluster, which is
still running 0.5.5_335_g25a93bf.

The situation is this: I killed a node (using md) and inserted it back (as
you might have read in a previous mail). Recovery started and finished (two
guests were running). Later I noticed my guests were not working (the qemu
processes were using 100% CPU; not even VNC was responding). I kill -9'ed
them (I forgot to flush the cache first ... damn!). I tried to start them
again, but they stopped right away, complaining about a drive I/O error.

I ran vdi check to "see" if the cluster was intact. Right now I'm running
vdi check on the big disks (100G and 1T). eth0 runs at over 400 Mbit/s:

sheepdog004
NET | eth0  37% |  | pcki 312609 | pcko  51712 |  | si 372 Mbps | so  36 Mbps

sheepdog002
NET | eth0  22% |  | pcki 194262 | pcko 189016 |  | si 222 Mbps | so 185 Mbps

So far it hasn't printed any missing chunks.

On sheepdog004 I'm checking 2 VDIs. On sheepdog002 I'm checking 1 VDI.
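
For reference, this is roughly how I'm driving the checks (a sketch only;
"guest1" is a hypothetical VDI name, and the client binary is `collie` as
shipped with the 0.5.x releases):

    # list the VDIs on the cluster to get their names
    collie vdi list

    # check the object consistency of one VDI; one of these per VDI is
    # what is generating the network traffic shown above
    collie vdi check guest1

    # what I should have done before kill -9: flush the object cache
    # (I believe this is `collie vdi flush <vdiname>` in this release,
    # but check the help output of your version)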