Hi everyone,

I have just finished installing my new Sheepdog cluster based on 0.9.2 and I am puzzled by the poor performance I get. Maybe someone can help me tweak my settings to get better results, because I really don't see why this cluster is so much slower than the other clusters I run.
Sorry, this post is long, but I see no other way to explain clearly what is going on, so apologies for the inconvenience and thank you if you take the time to read through all of it.

First, here are 3 different configurations; the third one is my latest and the one that puzzles me:


First: "The Good" ... this one sits in between "The Bad" and "The Ugly". It is a 3-node cluster with 1 SSD holding Sheepdog's object cache and journal, and 1 SATA drive with one LV dedicated to Sheepdog.
The cluster uses a single 1 Gbps interface for both cluster communication and data replication.
* Running Sheepdog: 0.9.1
* Cluster format: -c 2
* Command line: /usr/sbin/sheep -n -w size=100G dir=/mnt/metasheep --pidfile /var/run/sheep.pid /var/lib/sheepdog -j dir=/var/lib/sheepdog/journal size=1024M
* Mount points:
  /dev/vg0/sheepdog on /var/lib/sheepdog type xfs (rw,noatime,nodiratime,attr2,delaylog,noquota)  <====== this is the SATA drive
  /dev/sdb1 on /mnt/metasheep type xfs (rw,noatime,nodiratime,attr2,delaylog,noquota)  <====== this is the SSD drive
  /var/lib/sheepdog/journal --> /mnt/metasheep/journal  <====== this is a symlink to a folder on the SSD drive
* hdparm -tT /dev/sda (SATA drive):
  /dev/sda:
   Timing cached reads:   25080 MB in 2.00 seconds = 12552.03 MB/sec
   Timing buffered disk reads: 514 MB in 3.01 seconds = 170.95 MB/sec
* hdparm -tT /dev/sdb (SSD drive):
  /dev/sdb:
   Timing cached reads:   19100 MB in 2.00 seconds = 9556.21 MB/sec
   Timing buffered disk reads: 2324 MB in 3.00 seconds = 774.12 MB/sec
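For completeness, this is roughly how the SSD is wired in on each node of "The Good" (reconstructed from memory, exact device names as listed above):

  # SATA LV holding the actual object store
  mount /dev/vg0/sheepdog /var/lib/sheepdog
  # SSD partition holding the object cache and the journal
  mount /dev/sdb1 /mnt/metasheep
  mkdir -p /mnt/metasheep/journal
  # sheep is pointed at /var/lib/sheepdog/journal, which lives on the SSD through this symlink
  ln -s /mnt/metasheep/journal /var/lib/sheepdog/journal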
Second: "The Bad" ... this one is very simple: a 2-node cluster with a dedicated 250 GB partition on a SATA2 drive for Sheepdog.
The cluster uses a single 1 Gbps interface for both cluster communication and data replication.
* Running Sheepdog: 0.9.2_rc0
* Cluster format: -c 2
* Command line: /usr/sbin/sheep -n --pidfile /var/run/sheep.pid /var/lib/sheepdog/
* Mount points:
  /dev/sdb1 on /var/lib/sheepdog type xfs (rw,noatime,nodiratime,attr2,delaylog,noquota)
* hdparm -tT /dev/sdb:
  /dev/sdb:
   Timing cached reads:   16488 MB in 2.00 seconds = 8251.57 MB/sec
   Timing buffered disk reads: 424 MB in 3.01 seconds = 140.85 MB/sec


Third: "The Ugly" ... this is my latest attempt and it should normally be my most powerful setup, but I get pretty bad results, which I will detail further down. For now, here is the configuration.
This is an 8-node cluster with dedicated interfaces for cluster communication (eth0) and data replication (eth1), both 1 Gbps, with jumbo frames enabled on eth1 (MTU 9000).
Each node has 1 SSD dedicated to Sheepdog's object cache and 3 dedicated 600 GB 15k SAS drives for Sheepdog's storage.
The journal is not enabled, per your recommendation (and to be honest, I had a bad crash of the journal during my test runs).
Drives are handled individually in MD (multi-disk) mode and the cluster uses erasure coding.
* Running Sheepdog: 0.9.2_rc0
* Cluster format: -c 4:2
* Command line: /usr/sbin/sheep -n -w size=26G dir=/mnt/metasheep -i host=172.16.0.101 port=7002 -y 10.1.0.101 --pidfile /var/run/sheep.pid /var/lib/sheepdog /var/lib/sheepdog/disc0,/var/lib/sheepdog/disc1,/var/lib/sheepdog/disc2
* Mount points:
  /dev/pve/metasheep on /mnt/metasheep type ext4 (rw,noatime,errors=remount-ro,barrier=0,nobh,data=writeback)  <===== this is an LV on the SSD drive
  /dev/sdb on /var/lib/sheepdog/disc0 type xfs (rw,noatime,nodiratime,attr2,delaylog,logbufs=8,logbsize=256k,noquota)  <===== this is a 15k SAS drive
  /dev/sdc on /var/lib/sheepdog/disc1 type xfs (rw,noatime,nodiratime,attr2,delaylog,logbufs=8,logbsize=256k,noquota)  <===== this is a 15k SAS drive
  /dev/sdd on /var/lib/sheepdog/disc2 type xfs (rw,noatime,nodiratime,attr2,delaylog,logbufs=8,logbsize=256k,noquota)  <===== this is a 15k SAS drive
* hdparm -tT /dev/sd{b,c,d} (SAS drives):
  /dev/sd{b,c,d}:
   Timing cached reads:   14966 MB in 2.00 seconds = 7490.33 MB/sec
   Timing buffered disk reads: 588 MB in 3.00 seconds = 195.96 MB/sec
* hdparm -tT /dev/sde (SSD drive):
  /dev/sde:
   Timing cached reads:   14790 MB in 2.00 seconds = 7402.49 MB/sec
   Timing buffered disk reads: 806 MB in 3.00 seconds = 268.28 MB/sec


So, as you can see, these are 3 quite different configurations and "The Ugly" should on paper give the best results, but honestly, that is not the case.
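One thing I keep in the back of my mind when looking at the numbers below (please correct me if my understanding of Sheepdog's erasure coding is wrong): with -c 4:2, objects are striped across the nodes instead of being stored as whole copies, roughly:

  4 MB object with -c 4:2  ->  4 data strips of 1 MB + 2 parity strips of 1 MB, on different nodes
  uncached read            ->  data strips gathered over the 1 Gbps replication link (~125 MB/s ceiling)

whereas with -c 2 a full copy of each object can be served from a single node. Is that picture correct?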
I currently have very few virtual machines on this last cluster, and every VM I tested is based on the same setup (Debian Wheezy, virtio-scsi drive with discard and writeback cache enabled).
I have a "virtual desktop" VM with a few tools like Eclipse, Firefox, LibreOffice, ... big pieces of software that need reasonably good I/O to run smoothly.
On "The Ugly", on a cold start, Eclipse takes a long time to start, with heavy I/O and CPU usage, and so do Firefox and the others.
Close the apps and start them again a few times, and each new start is a bit faster and smoother ... to me, that looks like the object cache on the SSD is doing its job, but you need to keep the VM on the same node and keep it running for a while before getting any benefit from this mechanism. And this cache is pretty small (26 GB in that case), so with many VMs it won't help much for the time being.
On the other configurations, everything is much smoother right from the cold start.

I know hdparm is not a good benchmark, but I did 3 passes on each configuration, from inside the virtual machine (see the PS at the end for the fio run I plan to redo these tests with); here are the results:

"The Good":
  /dev/sda:
   Timing cached reads:   20526 MB in 2.00 seconds = 10275.86 MB/sec
   Timing buffered disk reads: 660 MB in 3.00 seconds = 219.85 MB/sec
  /dev/sda:
   Timing cached reads:   20148 MB in 2.00 seconds = 10087.56 MB/sec
   Timing buffered disk reads: 2600 MB in 3.00 seconds = 865.85 MB/sec
  /dev/sda:
   Timing cached reads:   20860 MB in 2.00 seconds = 10443.10 MB/sec
   Timing buffered disk reads: 3148 MB in 3.18 seconds = 989.06 MB/sec

Conclusion: results get better on each pass (most likely thanks to the object cache), but even the first pass is already good (better than the underlying physical drive, in fact).


"The Bad":
  /dev/sda:
   Timing cached reads:   13796 MB in 2.00 seconds = 6906.54 MB/sec
   Timing buffered disk reads: 1084 MB in 3.01 seconds = 360.02 MB/sec
  /dev/sda:
   Timing cached reads:   13258 MB in 2.00 seconds = 6639.34 MB/sec
   Timing buffered disk reads: 1218 MB in 3.01 seconds = 405.30 MB/sec
  /dev/sda:
   Timing cached reads:   12852 MB in 2.00 seconds = 6433.39 MB/sec
   Timing buffered disk reads: 1306 MB in 3.00 seconds = 435.15 MB/sec

Conclusion: this is astonishing! Simple replication, slow disks, no object cache ... the worst case on paper, and yet extremely good performance ... I don't understand, this can't be!


"The Ugly":
  /dev/sda:
   Timing cached reads:   13762 MB in 2.00 seconds = 6886.51 MB/sec
   Timing buffered disk reads: 90 MB in 3.03 seconds = 29.73 MB/sec
  /dev/sda:
   Timing cached reads:   13828 MB in 2.00 seconds = 6919.47 MB/sec
   Timing buffered disk reads: 308 MB in 3.00 seconds = 102.58 MB/sec
  /dev/sda:
   Timing cached reads:   14202 MB in 2.00 seconds = 7106.75 MB/sec
   Timing buffered disk reads: 1580 MB in 3.00 seconds = 526.62 MB/sec

Conclusion: the cold start with an empty cache is just horrible ... 30 MB/sec!!!
After a few runs, I guess the object cache on the SSD is doing its job: I am above 500 MB/sec after 3 runs and capping at about 750 MB/sec after a few more, but the first pass is horrible and, guess what, I get these bad results again as soon as I migrate the VM to another node, since the cache there is empty ...


It really puzzles me ... I should be getting excellent results on this setup and it is in fact the worst!
What did I do wrong?
How can I trace what's going on?
What can I try?

I really need help, because I can't put this into production with results that are far worse than what I got from a quick-and-dirty lab test on 2 mainstream PCs :(

Thanks in advance for any help you can provide.

Best regards.

Walid
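PS: since hdparm only measures sequential buffered reads, here is the kind of fio run I intend to repeat these tests with inside the VMs, to get more meaningful numbers (just a sketch; device name, sizes and runtimes still to be adapted):

  # sequential reads, bypassing the guest page cache
  fio --name=seqread --filename=/dev/sda --rw=read --bs=1M --size=4G \
      --direct=1 --ioengine=libaio --iodepth=16 --group_reporting
  # random 4k reads, to see what the cluster really does when nothing is cached
  fio --name=randread --filename=/dev/sda --rw=randread --bs=4k --size=1G \
      --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting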