<br><br>
<div class="gmail_quote">On Thu, May 3, 2012 at 3:35 PM, MORITA Kazutaka <span dir="ltr"><<a href="mailto:morita.kazutaka@gmail.com" target="_blank">morita.kazutaka@gmail.com</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT:#ccc 1px solid;MARGIN:0px 0px 0px 0.8ex;PADDING-LEFT:1ex" class="gmail_quote">At Thu, 3 May 2012 10:02:38 +0800,<br>
<div class="im">HaiTing Yao wrote:<br>><br>> On Thu, May 3, 2012 at 3:37 AM, MORITA Kazutaka<br>> <<a href="mailto:morita.kazutaka@gmail.com">morita.kazutaka@gmail.com</a>>wrote:<br>><br>> > At Wed, 2 May 2012 15:12:49 +0800,<br>
> > <a href="mailto:yaohaiting.wujue@gmail.com">yaohaiting.wujue@gmail.com</a> wrote:<br>> > ><br>> > > From: HaiTing Yao <<a href="mailto:wujue.yht@taobao.com">wujue.yht@taobao.com</a>><br>> > ><br>
> > > Sometimes we need a node to be able to come back after a while.<br>> > ><br>> > > When we need this:<br>> > ><br>> > > 1, restart the sheepdog daemon for an upgrade or another purpose<br>
> > > 2, the corosync driver loses its token for a short while<br>> ><br>> > This is a corosync-specific problem, and should be handled by changing<br>> > parameters in corosync.conf, I think.<br>
> ><br>><br>> For cluster storage, the storage system should deal with temporary node or<br>> network failures. It cannot assume the cluster is always stable. Changing<br>> corosync parameters cannot eliminate temporary node failures, because of<br>> some protocol reasons. I am not sure whether zookeeper and the other drivers have the same<br>> problem, but zookeeper also has a timeout after which the zookeeper server cannot<br>> communicate with a node. I think it also cannot avoid the problem under<br>> some conditions.<br>><br>> I tried to implement a solution similar to Amazon Dynamo's for temporary<br>> node or network failures. Perhaps I should keep the hinted handoff for a<br>> failed node on the VM's host node, so I reused the object cache to keep<br>> the hinted handoff. With the cache, the I/O will not be blocked.<br><br></div>If qemu uses cache=writethrough, the I/O will be blocked. Note that<br>the requested consistency level for Sheepdog is quite different from<br>the one for Dynamo.<br>
<div class="im"> </div></blockquote>
<div> </div>
<div>Yes, the I/O will be blocked without the cache, but that blocking is not a fatal problem. </div>
<div> </div>
<div>With write-through, I can use the object cache to keep the hinted handoff on the VM's host node without blocking I/O at all. When the temporarily failed node comes back, the hinted handoff is copied to it. This can be implemented within days. </div>
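<div>A rough sketch of what I mean by keeping the hinted handoff locally (hypothetical names, not the actual sheep daemon code): writes destined for the failed node are buffered on the host instead of blocking, and replayed when the node rejoins.</div>

```python
class HintedHandoffStore:
    """Buffers writes destined for a temporarily failed node so that
    I/O is not blocked; the hints are replayed when the node returns.
    Illustrative sketch only -- not sheepdog's object cache API."""

    def __init__(self):
        self.hints = {}  # node_id -> {object_id: data}

    def write(self, node_id, object_id, data, node_is_up):
        if node_is_up:
            # Normal path: the replica goes straight to the node.
            return ("sent", node_id)
        # Node is down: keep the replica locally instead of blocking I/O.
        self.hints.setdefault(node_id, {})[object_id] = data
        return ("hinted", node_id)

    def replay(self, node_id):
        """Hand back (and drop) the buffered objects for a node that
        has rejoined, so they can be copied over."""
        pending = self.hints.pop(node_id, {})
        return sorted(pending.items())
```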
<div> </div>
<div>Some objects will lose one replica if the object is also part of a local request. Perhaps the loss of a replica is not fatal, because keeping strictly full copies is difficult for our node management. If we chose a replacement node for the failed node in order to keep the full number of copies, we could not handle the replacement node failing again without a central node and versioned object data.</div>
<div> </div>
<div>The multicast policy of corosync cannot guarantee that the token is never lost. Token loss usually leads to a network partition, after which the whole cluster cannot be used anymore. Tuning corosync cannot eliminate token loss, so sheepdog must face this problem.</div>
<div> </div>
<div>I can get rid of the I/O blocking, but first we must decide whether we need this kind of failure detection at all. </div>
<div> </div>
<div> </div>
<blockquote style="BORDER-LEFT:#ccc 1px solid;MARGIN:0px 0px 0px 0.8ex;PADDING-LEFT:1ex" class="gmail_quote">
<div class="im">><br>><br>> ><br>> > So I think the main benefit of this patchset is to allow us to restart<br>> > sheep daemons without changing node membership, but what's the reason<br>> > you want to avoid temporal membership changes? Sheepdog blocks write<br>
> > I/Os when it cannot create full replicas, so basically we should<br>> > remove the failed nodes from node membership ASAP.<br>> ><br>><br>> Restarting the daemon will lead to two times of data recovery. If we<br>
> upgrade the cluster with much data, the lazy repair is useful.<br><br></div>It is definitely necessary to delay object recovery to avoid an extra<br>data copy against transient failure. However, it doesn't look a good<br>
idea to delay changing node membership which is used for deciding<br>object placement.<br><br>Actually, Sheepdog already handles transient node failure gracefully.<br>For example,<br><br> Epoch Nodes<br> 1 A, B, C, D<br>
2 A, B, C <- node D fails temporally<br> 3 A, B, C, D<br><br>If object recovery doesn't run at epoch 2, there is no object move<br>between nodes. I know how to handle transient network partition is a<br>
challenging problem with the current implementation, but I'd like to<br>see another approach which doesn't block I/Os for a long time.<br></blockquote>
<div> </div>
<div>From my tests, recovery has usually already begun by the time epoch 3 arrives.</div>
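<div>To illustrate the epoch example quoted above: if recovery is deferred at epoch 2 and the node set at epoch 3 equals the set at epoch 1, any deterministic placement function maps objects to the same nodes again, so nothing moves. A minimal sketch (illustrative hashing only, not sheepdog's actual placement algorithm):</div>

```python
import hashlib

def placement(nodes, object_id, copies=3):
    """Deterministic placement: rank nodes by hash(node + object id)
    and take the first `copies`. Any scheme like this gives identical
    results for identical node sets, regardless of list order."""
    ring = sorted(nodes,
                  key=lambda n: hashlib.sha1((n + object_id).encode()).hexdigest())
    return ring[:copies]

epoch1 = ["A", "B", "C", "D"]
epoch3 = ["A", "B", "C", "D"]  # node D is back: same membership as epoch 1

# Same node set => same placement, so no objects need to move.
assert placement(epoch1, "obj42") == placement(epoch3, "obj42")
```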
<div> </div>
<blockquote style="BORDER-LEFT:#ccc 1px solid;MARGIN:0px 0px 0px 0.8ex;PADDING-LEFT:1ex" class="gmail_quote"><br>If it is confusing to show frequent node membership changes to users,<br>how about managing two node lists? One node list is used internally<br>
for consistent hashing, and the other one is shown to administrators<br>and doesn't change rapidly.<br><br></blockquote>
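<div>As I understand it, the two-list scheme suggested above might look roughly like this (a sketch with hypothetical names, not actual sheepdog structures): the internal list reacts to every membership event and drives placement, while the displayed list hides leaves shorter than a grace period.</div>

```python
class Membership:
    """Two membership views: `internal` changes immediately and is used
    for consistent hashing; `displayed` is what administrators see and
    only drops a node after a grace period. Hypothetical sketch."""

    def __init__(self, grace=30.0):
        self.internal = set()   # used for object placement
        self.displayed = set()  # shown to administrators
        self.left_at = {}       # node -> time it left
        self.grace = grace      # seconds before a leave is shown

    def join(self, node):
        self.internal.add(node)
        self.displayed.add(node)
        self.left_at.pop(node, None)

    def leave(self, node, now):
        self.internal.discard(node)  # placement reacts immediately
        self.left_at[node] = now     # display change is deferred

    def admin_view(self, now):
        # Only drop a node from the display after the grace period.
        for node, t in list(self.left_at.items()):
            if now - t >= self.grace:
                self.displayed.discard(node)
                del self.left_at[node]
        return sorted(self.displayed)
```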
<div> </div>
<div>I do not think frequent membership changes will confuse users much. I just want to avoid transient failures leading to a network partition and unnecessary data recovery.</div>
<div> </div>
<div>Thanks</div>
<div>Haiti</div>
<div> </div>
<blockquote style="BORDER-LEFT:#ccc 1px solid;MARGIN:0px 0px 0px 0.8ex;PADDING-LEFT:1ex" class="gmail_quote">Thanks,<br><br>Kazutaka<br>
<div class="HOEnZb">
<div class="h5"><br>><br>> When we format the cluster, we can specify the temorary failure detection<br>> on/off. When it is on, there is an optional lazy reparr for eager repair.<br>><br>> Thanks<br>> Haiti<br>
><br>> ><br>> > Thanks,<br>> ><br>> > Kazutaka<br></div></div></blockquote></div><br>