[sheepdog-users] cache flush: all or nothing

Liu Yuan namei.unix at gmail.com
Thu Jun 6 17:04:01 CEST 2013

On 06/06/2013 09:51 PM, Valerio Pachera wrote:
> 2013/6/6 Liu Yuan <namei.unix at gmail.com>:
>> So your last mail meant that you couldn't reproduce the long wait time
>> problem, right?
> No.
> This is a scale test (only 4G written).
> After 30 minutes the guest was doing nothing; it took 30 seconds to
> flush the cache with 'vdi cache flush' and 1 minute and 28 seconds to
> shut down.
> I expected:
> 1- after waiting 30 minutes, 'vdi cache flush' to be immediate or almost;
> 2- after 'vdi cache flush', in any case, the guest should have nothing
> left to flush when shutting down, so I don't understand why it takes
> 1 minute and 28 seconds.

I can't reproduce the problem even with ext3.

So I guess something is wrong with your QEMU. What version is it? You can
add '-d' to sheep and grep the log for 'FLUSH' to see whether the FLUSH
request is issued.
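A rough sketch of that check (the store path and log location are
assumptions; adjust them to your deployment — the `command -v` guard just
lets the snippet run harmlessly on a machine without sheep installed):

```shell
# Start the daemon with debug logging enabled ('-d'); the store
# directory below is only an example.
command -v sheep >/dev/null && sheep -d /var/lib/sheepdog

# While the guest runs, look for FLUSH requests in the daemon log
# (path is an assumption); no output means no FLUSH was issued.
grep FLUSH /var/lib/sheepdog/sheep.log 2>/dev/null || true
```

If the grep prints nothing while the guest is busy writing, QEMU is not
issuing FLUSH at all.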

After I dd 8G of data (without manually calling 'sync' inside the VM)
and just wait several minutes, I can shut down the VM instantly.
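That experiment can be repeated inside any guest with a plain dd; here is
a scaled-down sketch (64 MB instead of 8G, and the path is arbitrary):

```shell
# Write fresh data and deliberately skip 'sync' afterwards, leaving the
# flush to the cache/shutdown path; raise count= to 8192 for the full
# 8G test from the mail above.
dd if=/dev/zero of=/tmp/cache-test.img bs=1M count=64
```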

Or did you mount ext3 with noatime and write to an existing file? Just
try dd'ing a new file and see what happens.
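The two cases can be compared directly; a small sketch (file names and
sizes are arbitrary):

```shell
# Prepare an "existing" file, then overwrite it in place without
# truncating -- the overwrite-existing case in question:
dd if=/dev/zero of=/tmp/existing.img bs=1M count=16
dd if=/dev/zero of=/tmp/existing.img bs=1M count=16 conv=notrunc

# Compare against writing a brand-new file:
rm -f /tmp/new.img
dd if=/dev/zero of=/tmp/new.img bs=1M count=16
```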

> Remember that I saw almost no network activity on the node after I'd
> been writing data to the guest (max 100 kbit/s).

So FLUSH wasn't issued from the VM.

> If it was flushing the cache, I'd expect to see some Mbit/s of traffic.
>> As above mentioned, we can mitigate this problem by
>> introducing background pusher, that is flushing while writing.
> I think it's a good addition, but right now
> what I was not able to reproduce was this error:
> root at test004:~# time collie vdi cache flush data
> Jun 06 11:36:30 [main] do_read(297) failed to read from socket: -1,
> Resource temporarily unavailable
> Jun 06 11:36:30 [main] exec_req(405) failed to read a response
> failed to execute request

I noticed that your fd limit is very small (1024); please enlarge it as
suggested in the log. A limit that low will sometimes cause EAGAIN
errors, which are fatal to sheepdog.

Jun 05 17:15:12 [main] check_host_env(395) WARN: Allowed open files
1024 too small, suggested 1024000
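For reference, the limit can be checked and raised from the shell (the
limits.conf entry below assumes sheep runs as root; adjust the user name
otherwise):

```shell
# Show the current soft limit on open file descriptors:
ulimit -n

# Raise it for this shell before starting sheep (needs root, or a hard
# limit that already allows it):
#   ulimit -n 1024000

# To make it permanent across logins, add to /etc/security/limits.conf:
#   root  soft  nofile  1024000
#   root  hard  nofile  1024000
```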

> I saw that also on my production cluster once, flushing the cache after
> killing a guest.
> Not only that: I expected that each time I write 2048 MB of data on my
> guest and run 'vdi cache flush' right after, it would take time
> (as written before), but after I repeated the write-and-flush sequence
> (dd on the guest, then 'vdi cache flush') 2-3 times, it was taking a
> few seconds or was even immediate.

Why do you use 'collie vdi cache flush'? Isn't just running 'sync'
inside the VM enough for you?
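For comparison, the in-guest flush is a single command; the guest kernel
writes back its dirty pages, which (with a flush-capable disk cache mode
in QEMU) should reach sheep as FLUSH requests:

```shell
# Inside the guest: write back dirty pages to the virtual disk.
sync
```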

> But when shutting down the guest... I had to wait a long time (2 minutes or more).

That looks strange to me; it shouldn't happen, and I can't reproduce the
problem. While you are shutting down the VM, do you notice network
traffic going high? If not, no objects are being flushed.
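A quick way to watch this without extra tools is to sample the node's
interface byte counters during shutdown (eth0 is an assumption;
substitute your interface name):

```shell
# Print the eth0 counter line once per second for a few seconds; a
# large jump in the byte columns between samples means objects are
# being flushed over the network.
for i in 1 2 3; do
    grep 'eth0:' /proc/net/dev
    sleep 1
done
```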
