[sheepdog-users] 50.000 iops per VM
Stefan Priebe - Profihost AG
s.priebe at profihost.ag
Wed Jul 4 09:08:49 CEST 2012
Hi Christoph,
thanks for your reply. Here are some more details.
On 04.07.2012 07:07, Christoph Hellwig wrote:
> On Tue, Jul 03, 2012 at 10:28:17PM +0200, Stefan Priebe wrote:
>> For testing purposes I had:
>>
>> 4x dedicated sheep nodes:
>> - Intel Xeon E5 8-core 3.6 GHz
>> - 64GB Memory
>> - 10GBE
>> - 4x Intel 240GB SSD (write: 250 MB/s and 25,000 IOPS random 4k)
>
> exact git revision of sheepdog?
7c62b6e935b1943c57139a00d1b7d322c8a9c521
>> On the KVM machine I used a sheep process acting as the gateway with
>> the following options: -G -i 32 -g 32 -W
>
>> On each sheep node I used one sheep per disk: -g 32 -i 32 -W
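For reference, the full sheep invocations look roughly like this (store
directories and the extra port are only placeholders, not the exact
paths I use):

  # gateway-only sheep on the KVM host
  sheep -G -i 32 -g 32 -W /var/lib/sheepdog/gateway
  # one sheep per SSD on each storage node, each on its own port
  sheep -i 32 -g 32 -W -p 7001 /var/lib/sheepdog/disk0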
>
> filesystems on the nodes?
Kernel: 3.5-rc5
FS: XFS with mkfs.xfs -n size=64k
Mount options (from /proc/mounts):
rw,noatime,nodiratime,attr2,filestreams,nobarrier,inode64,noquota
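For completeness, each SSD is prepared roughly like this (device name
and mount point are just examples; attr2/noquota in /proc/mounts are
defaults):

  mkfs.xfs -n size=64k /dev/sdb
  mount -o noatime,nodiratime,filestreams,nobarrier,inode64 /dev/sdb /var/lib/sheepdog/disk0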
> I/O scheduler in the guest and the host?
I tested noop, deadline and cfq; I wasn't able to see any differences.
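I switch the scheduler per device via sysfs, roughly like this (sda is
just an example device):

  cat /sys/block/sda/queue/scheduler        # e.g. noop deadline [cfq]
  echo deadline > /sys/block/sda/queue/scheduler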
> sheep git revision? qemu release?
Sheep: 7c62b6e935b1943c57139a00d1b7d322c8a9c521
Qemu-kvm: 1.1 stable
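For completeness, the guest disk is attached through the qemu sheepdog
block driver, roughly like this (gateway address, port and VDI name are
placeholders):

  qemu-system-x86_64 ... -drive file=sheepdog:localhost:7000:testvdi,if=virtio,cache=writeback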
> In general I doubt anyone has optimized sheepdog for IOPS and low
> latency at this moment, as other things have kept people busy. There's
> some relatively low-hanging fruit like avoiding additional copies
> in the gateway, but your numbers still sound very low.
>
> Can you also do a perf record -g on both a storage node and the
> kvm box to see if there's anything interesting on them?
My perf command does not know the -g option.
Is a plain perf record sleep 10 enough? Should I then upload the data
file somewhere?
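Assuming a perf build that does support call graphs, the recording
would look roughly like this (the 10 second duration is just an
example):

  perf record -a -g -- sleep 10    # system-wide sample with call graphs
  perf report --stdio              # inspect the recorded perf.data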
Snapshot of perf top from KVM host:
14.96% [kernel] [k] _raw_spin_lock
8.13% kvm [.] 0x00000000001d8084
4.08% [kernel] [k] get_pid_task
3.91% [kvm] [k] kvm_vcpu_yield_to
3.83% [kernel] [k] yield_to
2.62% [kernel] [k] __copy_user_nocache
2.53% [kvm] [k] vcpu_enter_guest
1.95% [kvm_intel] [k] vmx_vcpu_run
1.69% [kernel] [k] __srcu_read_lock
1.24% [kernel] [k] _raw_spin_lock_irqsave
1.19% [kvm] [k] kvm_vcpu_on_spin
1.13% [kvm] [k] __vcpu_run
1.06% [kernel] [k] __schedule
Snapshot of perf top from sheep node to which the qemu-kvm block device
connects:
4.37% [kernel] [k] clear_page_c
3.09% libgcc_s.so.1 [.] 0x00000000000118f7
2.44% libc-2.11.3.so [.] 0x000000000007518e
1.95% [kernel] [k] __schedule
1.82% [kernel] [k] _raw_spin_lock
1.21% [kernel] [k] _raw_spin_lock_irqsave
1.07% libc-2.11.3.so [.] vfprintf
1.07% [kernel] [k] zap_pte_range
1.00% [kernel] [k] copy_user_generic_string
0.80% [kernel] [k] ahci_port_intr
0.79% [kernel] [k] arch_dup_task_struct
0.70% [kernel] [k] menu_select
0.68% [kernel] [k] fget_light
0.67% [kernel] [k] _raw_spin_lock_irq
0.65% libpthread-2.11.3.so [.] pthread_mutex_lock
0.65% [kernel] [k] __switch_to
0.59% [kernel] [k] __slab_free
Snapshot of perf top from sheep node (not acting as the gateway / target
for kvm):
2.78% libgcc_s.so.1 [.] 0x000000000000e72b
2.21% [kernel] [k] __schedule
2.14% [kernel] [k] ahci_port_intr
2.08% [kernel] [k] _raw_spin_lock
1.77% [kernel] [k] _raw_spin_lock_irqsave
1.15% [kernel] [k] ahci_interrupt
1.11% [kernel] [k] ahci_scr_read
0.94% [kernel] [k] kmem_cache_alloc
0.90% [kernel] [k] _raw_spin_lock_irq
0.81% [kernel] [k] menu_select
0.76% [kernel] [k] _raw_spin_unlock_irqrestore
0.73% libpthread-2.11.3.so [.] pthread_mutex_lock
0.70% libc-2.11.3.so [.] vfprintf
Stefan