<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">I’m hoping someone can shed some light on some pretty startling performance deltas between different clustering backends in Sheepdog. <div><br></div><div>We are running a 20-node, 320-spindle (280 for data storage, 40 SSD for meta/OS) Sheepdog cluster backed by a 40Gb/s IB network.</div><div><br></div><div>In a VM running under OpenStack Havana using the root/ephemeral volume driver (<a href="https://github.com/devoid/nova/tree/sheepdog-nova-support-havana">https://github.com/devoid/nova/tree/sheepdog-nova-support-havana</a>) we are seeing the following performance characteristics under Sheepdog 0.8.0. Test results were obtained using: iozone -i0 -i1 -t 1 -r 128k -s 10G (-t 4 for the parallel read numbers).</div><div><br></div><div>Running with corosync 2.3.3 with RDMA, replica 3, single VM: 600MB/s on write, about 132MB/s on read, with a maximum read of 520MB/s running four read threads in parallel.</div><div><br></div><div>Running with zookeeper 3.4.5+dfsg-1, replica 3, single VM: 40MB/s on write, about 40MB/s on read, with a maximum read of 90MB/s running four read threads in parallel.</div><div><br></div><div>In both cases, multiple write threads had little to no impact on overall write performance.</div><div><br></div><div>Note that the sheep daemon is spawned identically in each case, except for the references to each respective cluster backend. Have others seen similar performance numbers with zookeeper vs. corosync? Are there any standard zookeeper configuration changes you make out of the box? We are currently using the defaults from the zookeeper package installation.</div><div><br></div><div>-ryan</div></body></html>