[sheepdog-users] sheepdog vs ceph question

Walid Moghrabi walid.moghrabi at lezard-visuel.com
Mon May 5 11:35:32 CEST 2014


Depending on my VM usage (mostly reads or mostly writes), I use writethrough (for mostly-read workloads) or writeback (for mostly-write workloads) ... but if I understood correctly, without object cache this has no effect ... am I wrong? 
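For reference, the cache mode in question is the one passed on the QEMU drive definition. A minimal sketch of attaching a sheepdog VDI with an explicit cache mode might look like the following (the VDI name "test" and the localhost:7000 sheep address are illustrative assumptions, not taken from this thread):

```shell
# Hedged sketch: boot a VM from a sheepdog VDI named "test" with an explicit
# cache mode. The VDI name and sheep address (localhost:7000) are assumptions.
qemu-system-x86_64 -m 1024 \
  -drive file=sheepdog:localhost:7000:test,if=virtio,cache=writeback
```

Swapping cache=writeback for cache=writethrough (or cache=none) is what the question below is about; without object cache, sheepdog itself decides when writes hit the replicas, which is presumably why the mode seems to make little difference.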


Regards, 


Walid 

----- Original Message -----

From: "飞" <duron800 at qq.com> 
To: "Walid Moghrabi" <walid.moghrabi at lezard-visuel.com> 
Sent: Saturday, May 3, 2014, 09:21:31 
Subject: Re: [sheepdog-users] Re: sheepdog vs ceph question 


Hello, in your test environment, do you set the cache mode of the QEMU VM disks to writeback, writethrough, or none? 



------------------ Original Message ------------------ 

From: "Walid Moghrabi" <walid.moghrabi at lezard-visuel.com> 
Sent: Tuesday, April 29, 2014, 11:05 PM 
To: "sheepdog-users" <sheepdog-users at lists.wpkg.org> 

Subject: Re: [sheepdog-users] Re: sheepdog vs ceph question 


1) I use XFS as it is the recommended FS I have seen recommended about everywhere ... I personally have a slight preference for ext4, but since I read many posts preferring XFS for this usage, I just followed the crowd. Here are my format options: mkfs.xfs -f -i size=512 /dev/sdxxx 
2) No I don't; I get very good performance without it and I'm quite happy that way. It also eases common operations such as backups, conversions, or migrations, since you don't have to flush the cache first. 
3) Since I don't use object cache, I would say no. 
4) No, I didn't try this; it sounds interesting. 
5) Not really. Here are a few quick tests, but this is no real benchmark, just to show you a test lab setup we have here: 


Setup : 
2-node ProxMox cluster, up to date (3.2): 
kernel version : 2.6.32-28-pve 
QEMU version : 1.7.1 
Sheepdog version : 0.8.0 


Hardware-wise, each node runs one Core i7 860 (2.8 GHz) CPU (4 cores, Hyper-Threading), 8 GB RAM, two 500 GB SATA hard drives (no RAID), and one 1 Gb Ethernet NIC. 
The system and applications are on the first hard drive; the second hard drive is fully dedicated to sheepdog for VDI storage. 
The sheepdog cluster is formatted with -c 2 copies (so it is pure replication in this setup), and the sheep daemon is launched with these options: /usr/sbin/sheep -n --pidfile /var/run/sheep.pid /var/lib/sheepdog/ 
No dedicated journal either. 
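For completeness, a two-copy cluster like the one described above is typically brought up with the sheep daemon plus the dog CLI. A sketch under those assumptions (the exact paths are the defaults, not verified against this particular cluster):

```shell
# Sketch of bringing up a 2-copy sheepdog cluster like the one above.
# Run the daemon on each node, then format the cluster once from any node.
/usr/sbin/sheep -n --pidfile /var/run/sheep.pid /var/lib/sheepdog/

dog cluster format --copies=2   # pure replication, matching -c 2 above
dog node list                   # verify that both nodes have joined
```

With --copies=2 every object is replicated on two nodes, which is why a single-node failure is survivable in this two-node lab.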


On the node itself 
============= 


Here is what I get from the disk dedicated to Sheepdog : 


hdparm -tT /dev/sdb : 

Timing cached reads: 16952 MB in 2.00 seconds = 8484.14 MB/sec 
Timing buffered disk reads: 376 MB in 3.00 seconds = 125.23 MB/sec 




Now, from a VM 
============ 


hdparm -tT /dev/vda : 

Timing cached reads: 13110 MB in 2.00 seconds = 6562.95 MB/sec 
Timing buffered disk reads: 636 MB in 3.08 seconds = 206.70 MB/sec 





dd if=/dev/zero of=/tmp/test bs=1M count=1024 
1024+0 records in 
1024+0 records out 
1073741824 bytes (1.1 GB) copied, 9.6222 s, 112 MB/s 





dd if=/dev/zero of=/tmp/test bs=1M count=1024 oflag=direct 
1024+0 records in 
1024+0 records out 
1073741824 bytes (1.1 GB) copied, 11.9081 s, 90.2 MB/s 



dd if=/dev/zero of=/tmp/test bs=1M count=1024 oflag=sync 
1024+0 records in 
1024+0 records out 
1073741824 bytes (1.1 GB) copied, 16.4097 s, 65.4 MB/s 
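The dd write tests above can be reproduced on any scratch directory. A small, safe variant (the /tmp/ddtest path and the 16 MB size are arbitrary choices for illustration; conv=fsync is used instead of oflag=direct, since O_DIRECT can fail on tmpfs-backed /tmp):

```shell
# Small, safe variant of the write tests above: write 16 MB of zeros to a
# scratch file, forcing the data to disk at the end with conv=fsync.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 conv=fsync
stat -c %s /tmp/ddtest   # file size in bytes: 16 MiB = 16777216
rm -f /tmp/ddtest
```

The three original runs differ only in the flags: no flag measures page-cache-buffered writes, oflag=direct bypasses the page cache per write, and oflag=sync forces a sync after every block, which is why the throughput drops from 112 to 90.2 to 65.4 MB/s.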


As you can see, for this very basic setup, performance is quite decent; using it daily, it is very responsive and works great. 


Regards, 


----- Original Message -----

From: "飞" <duron800 at qq.com> 
To: "Walid Moghrabi" <walid.moghrabi at lezard-visuel.com> 
Sent: Monday, April 28, 2014, 10:21:42 
Subject: Re: [sheepdog-users] sheepdog vs ceph question 


Hi Walid, thank you very much for your reply. I have some questions: 
1. What FS do you use? XFS or ext4? 
2. Do you use object cache? If so, what is your cache size? 
3. Do you use a dedicated journal disk? And have you dropped O_SYNC for the backend storage? 
4. Do you use erasure coding for your cluster/VDI? 

5. Do you have a performance report with QEMU? I would like to read it. 


------------------ Original Message ------------------ 

From: "Walid Moghrabi" <walid.moghrabi at lezard-visuel.com> 
Sent: Monday, April 28, 2014, 3:26 PM 
To: "sheepdog-users" <sheepdog-users at lists.wpkg.org> 

Subject: Re: [sheepdog-users] sheepdog vs ceph question 


Hi, 


Well, performance-wise, I didn't see much difference between the two, but Sheepdog is by far easier to configure, use, and maintain. 
I also see a big difference in performance during recovery from a failing node. 
It can certainly be fine-tuned, but with default settings, node recovery with Ceph is extremely resource-hungry, and if your network and I/O are not properly separated, it impacts performance a lot (though it recovers faster). 
Sheepdog, on the other hand, is really great when it comes to recovery: the impact is very low, and even on common hardware, and even without splitting networks, you can keep using your VMs with only a slight performance penalty, which is really not a big problem. 


Personally, I use both, and I have a slight preference for Sheepdog. 



----- Original Message -----

From: "飞" <duron800 at qq.com> 
To: "sheepdog-users" <sheepdog-users at lists.wpkg.org> 
Sent: Monday, April 28, 2014, 08:00:06 
Subject: [sheepdog-users] sheepdog vs ceph question 

Hello, I want to use sheepdog and ceph with QEMU. 
Do you have a performance report comparing sheepdog and ceph? Thank you. 
By the way, does Taobao Inc. use sheepdog in production? 
-- 
sheepdog-users mailing lists 
sheepdog-users at lists.wpkg.org 
http://lists.wpkg.org/mailman/listinfo/sheepdog-users 




