<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: arial,helvetica,sans-serif; font-size: 10pt; color: #000000'>Depending on my VM usage (mostly reads or mostly writes), I use writethrough (for mostly reads) or writeback (for mostly writes) ... but apparently, if I understand correctly, without object cache this has no effect ... am I wrong?<div><br></div><div>Regards,</div><div><br></div><div>Walid<br><br><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"飞" <duron800@qq.com><br><b>To: </b>"Walid Moghrabi" <walid.moghrabi@lezard-visuel.com><br><b>Sent: </b>Saturday, May 3, 2014 09:21:31<br><b>Subject: </b>Re: [sheepdog-users] Re: sheepdog vs ceph question<br><br><div>Hello, in your test environment, do you set the cache mode of the qemu VM disks to writeback, writethrough, or none?</div><div><div><br></div><div style="font-size: 12px;font-family: Arial Narrow;padding:2px 0 2px 0;">------------------ Original Message ------------------</div><div style="font-size: 12px;background:#efefef;padding:8px;"><div><b>From:</b> "Walid Moghrabi";<walid.moghrabi@lezard-visuel.com>;</div><div><b>Date:</b> Tuesday, April 29, 2014, 11:05 PM</div><div><b>To:</b> "sheepdog-users"<sheepdog-users@lists.wpkg.org>; </div><div></div><div><b>Subject:</b> Re: [sheepdog-users] Re: sheepdog vs ceph question</div></div><div><br></div><style>p { margin: 0;}
</style><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: #000000">1) I use XFS as it is the FS I see recommended just about everywhere ... I personally have a slight preference for Ext4, but since I read many posts preferring XFS for this usage, I just followed the crowd. Here are my format options: mkfs.xfs -f -i size=512 /dev/sdxxx<div>2) No, I don't. I get very good performance without it, I'm quite happy that way, and it eases common operations such as backups, conversions or migrations since you don't have to flush the cache first.</div><div>3) Since I don't use object cache, I would say no.</div><div>4) No, I didn't try this; it sounds interesting.</div><div>5) Not really. Here are a few quick tests, but this is no real benchmark, just to give you an idea from a test lab setup we have here:</div><div><br></div><div>Setup:</div><div>2-node Proxmox cluster, up to date (3.2):</div><div>Kernel version: 2.6.32-28-pve</div><div>QEMU version: 1.7.1</div><div>Sheepdog version: 0.8.0</div><div><br></div><div>Hardware-wise, each node runs one Core i7 860 (2.8 GHz) CPU (4 cores, Hyper-Threading), 8 GB RAM, two 500 GB SATA hard drives (no RAID) and one 1 Gb Ethernet NIC.</div><div>System and applications are on the first hard drive; the second hard drive is fully dedicated to Sheepdog for VDI storage.</div><div>The Sheepdog cluster is formatted with -c 2 copies (so it is pure replication in this setup) and the sheep daemon is launched with these options: /usr/sbin/sheep -n --pidfile /var/run/sheep.pid /var/lib/sheepdog/</div><div>No dedicated journal either.</div><div><br></div><div>On the node itself</div><div>=============</div><div><br></div><div>Here is what I get from the disk dedicated to Sheepdog:</div><div><br></div><div>hdparm -tT /dev/sdb : </div><div><div><span style="font-size: 10pt;"> Timing cached reads: 16952 MB in 2.00 seconds = 8484.14 MB/sec</span></div><div> Timing buffered disk reads: 376 MB in 3.00 seconds = 125.23 
MB/sec</div></div><div><br></div><div><br></div><div>Now, from a VM</div><div>============</div><div><br></div><div>hdparm -tT /dev/vda :</div><div><div><span style="font-size: 10pt;"> Timing cached reads: 13110 MB in 2.00 seconds = 6562.95 MB/sec</span></div><div> Timing buffered disk reads: 636 MB in 3.08 seconds = 206.70 MB/sec</div><div><br></div></div><div><br></div><div><div>dd if=/dev/zero of=/tmp/test bs=1M count=1024</div><div>1024+0 records in</div><div>1024+0 records out</div><div>1073741824 bytes (1.1 GB) copied, 9.6222 s, 112 MB/s</div></div><div><br></div><div><br></div><div><div>dd if=/dev/zero of=/tmp/test bs=1M count=1024 oflag=direct</div><div>1024+0 records in</div><div>1024+0 records out</div><div>1073741824 bytes (1.1 GB) copied, 11.9081 s, 90.2 MB/s</div></div><div><br></div><div><div>dd if=/dev/zero of=/tmp/test bs=1M count=1024 oflag=sync</div><div>1024+0 records in</div><div>1024+0 records out</div><div>1073741824 bytes (1.1 GB) copied, 16.4097 s, 65.4 MB/s</div></div><div><br></div><div>As you can see, for this very basic setup, performance is quite respectable, and using it daily, it is very responsive and works great.</div><div><br></div><div>Regards,</div><div><br></div><div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"飞" <duron800@qq.com><br><b>To: </b>"Walid Moghrabi" <walid.moghrabi@lezard-visuel.com><br><b>Sent: </b>Monday, April 28, 2014 10:21:42<br><b>Subject: </b>Re: [sheepdog-users] sheepdog vs ceph question<br><br><div>Hi <span style="background-color: rgb(239, 239, 239); font-size: 12px; line-height: 18px;">Walid</span><span style="line-height: 1.5;">, thank you very much for your reply. I have some questions:<br>1. Which FS do you use? XFS or Ext4?<br>2. Do you use object cache? If so, what is your cache size?<br>3. Do you use a dedicated journal disk?
And have you dropped O_SYNC for the backend storage?<br>4. Do you use </span>Erasure Code<span style="line-height: 1.5;"> for your cluster/VDI? </span></div><div><div>5. Do you have a performance report with qemu? I would like to read it.</div><div><br></div><div style="font-size: 12px;font-family: Arial Narrow;padding:2px 0 2px 0;">------------------ Original Message ------------------</div><div style="font-size: 12px;background:#efefef;padding:8px;"><div><b>From:</b> "Walid Moghrabi";<walid.moghrabi@lezard-visuel.com>;</div><div><b>Date:</b> Monday, April 28, 2014, 3:26 PM</div><div><b>To:</b> "sheepdog-users"<sheepdog-users@lists.wpkg.org>; </div><div></div><div><b>Subject:</b> Re: [sheepdog-users] sheepdog vs ceph question</div></div><div><br></div><style>p { margin: 0;}
</style><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: #000000">Hi,<div><br></div><div>Well, performance-wise, I didn't see much difference between the two, but Sheepdog is by far easier to configure, use and maintain.</div><div>I also see a big difference in performance when recovering from a failing node.</div><div>It can certainly be fine-tuned, but with default settings node recovery with Ceph is extremely resource-hungry, and if your network and I/O are not properly separated, it impacts performance a lot (though it recovers faster).</div><div>Sheepdog, on the other hand, is really great when it comes to recovery: the impact is very low, and even on common hardware, even without splitting networks, you can keep using your VMs with only a slight performance penalty, which is really not a big problem.</div><div><br></div><div>Personally, I use both and I have a slight preference for Sheepdog.</div><div><br></div><div><br><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"飞" <duron800@qq.com><br><b>To: </b>"sheepdog-users" <sheepdog-users@lists.wpkg.org><br><b>Sent: </b>Monday, April 28, 2014 08:00:06<br><b>Subject: </b>[sheepdog-users] sheepdog vs ceph question<br><br>Hello, I want to use sheepdog and ceph with qemu.<br>Do you have a report on performance tests comparing sheepdog and ceph? Thank you.<br>By the way, does Taobao Inc. use sheepdog in production?<br>-- <br>sheepdog-users mailing lists<br>sheepdog-users@lists.wpkg.org<br>http://lists.wpkg.org/mailman/listinfo/sheepdog-users<br></div><br></div></div></div></div><br></div></div></div></div><br></div></div></body></html>
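[Editor's appendix] The three dd runs quoted in this thread differ only in their write flags, which is what makes the comparison meaningful: buffered writes hit the page cache, oflag=direct bypasses it, and oflag=sync forces every write to stable storage. A minimal, scaled-down sketch of those same tests (the 16 MB size and /tmp path here are illustrative; the originals wrote 1 GB):

```shell
# The three dd write tests from the thread, scaled down so they run quickly.
#   (no flag)     -> writes land in the page cache (fastest, least durable)
#   oflag=direct  -> bypasses the page cache (raw device throughput)
#   oflag=sync    -> flushes every write to stable storage (slowest)
TESTFILE=/tmp/dd_cache_test

for flags in "" "oflag=direct" "oflag=sync"; do
    # dd prints its throughput summary on stderr; keep only that line.
    # Note: oflag=direct can fail on filesystems without O_DIRECT
    # support (e.g. a tmpfs-backed /tmp).
    dd if=/dev/zero of="$TESTFILE" bs=1M count=16 $flags 2>&1 | tail -n 1
done

rm -f "$TESTFILE"
```

The relative gap between the sync and buffered numbers gives a rough feel for how much a cache=writeback guest setting can hide backend latency.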