[sheepdog] [sheepdog-users] What is cooking in master # 2013.5.23

MORITA Kazutaka morita.kazutaka at gmail.com
Fri May 24 02:50:17 CEST 2013


At Thu, 23 May 2013 21:50:58 +0800,
Liu Yuan wrote:
> 
> Hello list,
> 
>    I am setting out to write a series about what we are doing on the
> development list, to keep you updated in a more user-friendly
> language, that is, English instead of C. I'll try to post on this
> topic every month or two on the mailing list.
> 
>   Since this is the first installment, I'd like to start with what we
> have done in the past.
> 
>   Sheepdog was initially targeted as a distributed storage solution
> for QEMU block devices. QEMU's Sheepdog block driver is implemented at
> the protocol layer, the lowest layer in the QEMU block stack. This is
> similar to QEMU's NBD driver, but ends up more powerful. Sitting at
> the first floor, we get the benefit that we can store whatever format
> we want, and many fancy features like live migration, snapshot, and
> clone are supported natively by the protocol. This means you can not
> only store 'raw' (our default format) images in Sheepdog to enjoy the
> best performance, but also enjoy advanced features like snapshots with
> the 'raw' format.
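> 
>   A minimal sketch of how this looks from the command line (assuming
> a sheep daemon already running on the local host, and a made-up VDI
> name 'Alice'):
> 
>     $ qemu-img create sheepdog:Alice 10G
>     $ qemu-system-x86_64 -drive file=sheepdog:Alice,if=virtio
> 
>   Note that no image file exists on the local disk; the 'sheepdog:'
> prefix tells QEMU to talk to the cluster through the protocol driver
> directly.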
> 
>   To summarize, what we have done for QEMU:
> 
>   1 Seamless integration with the QEMU software: users can use QEMU's
> companion tools like qemu-img and qemu-io to manipulate Sheepdog images
>   2 Support for live migration, live & offline snapshots
> (savevm/loadvm/qemu-img snapshot), and clone (see the sketch after
> this list)
>   3 Thin provisioning:
> 	a Discard/Trim support
> 	b Sparse images
> 	c Copy-on-write, used heavily for snapshot & clone
>   4 Users can specify the number of copies per volume
>   5 Users can store and boot an ISO in the Sheepdog cluster as a volume
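> 
>   As a sketch of the snapshot and clone workflow mentioned in point 2
> ('snap1' and 'Bob' are made-up names, 'Alice' a hypothetical volume):
> 
>     $ qemu-img snapshot -c snap1 sheepdog:Alice
>     $ qemu-img create -b sheepdog:Alice:snap1 sheepdog:Bob
> 
>   The first command takes a snapshot tagged 'snap1'; the second
> creates a writable clone 'Bob' backed by that snapshot, so only
> changed objects consume new space (copy-on-write).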
> 
>   What we have done for Sheepdog storage:
> 
>   1 Provide replicated storage for volumes, based on an object store
> with linear scalability
>   2 Automatic data healing
>   3 Intelligent node management
>   4 Intelligent management of multiple disks on a single node
>   5 Ease of use: for example, one-liners to set up and destroy the
> cluster, and no configuration file (see the sketch after this list)
>   6 Dual network card support
>   7 Object cache and journaling for best performance, with support for
> hierarchical storage (mixed SSD & SATA & SAS)
>   8 Sheepfs, to export volumes as a file-like abstraction to the local
> file system
>   9 Incremental backup of image snapshot chains
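> 
>   The one-liner setup mentioned in point 5, sketched for a single test
> node (the store path '/store' and the copy count are arbitrary example
> values):
> 
>     $ sheep /store                      # start a sheep daemon
>     $ collie cluster format --copies=3  # format the cluster once
> 
>   On a real cluster you start one sheep daemon per node and format
> from any one of them; no configuration file is involved.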
> 
>   What we have done for the OpenStack project:
> 
>   1 Cinder support, so a Sheepdog cluster can be used as a volume store
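> 
>     A minimal cinder.conf sketch (the exact driver path is an
> assumption here and may differ between OpenStack releases):
> 
>       [DEFAULT]
>       volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver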
> 
>   What we have done for the Libvirt project:
> 
>   1 Storage pool support
> 
> *******************************************************************
> 
> So here is what we are cooking right now on the development mailing list:
> 
>   1 Cluster-wide snapshot, to back up the whole cluster into a central storage
> 
>     URL: http://lists.wpkg.org/pipermail/sheepdog/2013-May/009532.html
>     status: first draft version is merged.
>     features done: incremental backup, auto-deduplication
>     features planned: finer-grained units for better dedup,
> compression, and support for other storage such as S3, Swift, and NFS
> as the backend store for the backup objects.
> 
>   2 OpenStack Glance support
> 
>     URL: https://review.openstack.org/#/c/29961/
>     status: WIP (Work In Progress)
>     Nothing surprising here; it just allows people to store images in
> Sheepdog via OpenStack.
> 
>   3 Plans for the next release and forming a new organization
> 
>     URL: http://lists.wpkg.org/pipermail/sheepdog/2013-May/009568.html
>     As usual, we want to hear more opinions, and this topic has become
> 'what we need to do for the v1.0 release'.
> 
>   4 Support for unaligned read/write/create operations on images
> 
>     URL: http://lists.wpkg.org/pipermail/sheepdog/2013-May/009615.html
>     status: merged.
>     This is the first step toward exposing file-like objects to user
> applications.
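> 
>     One way to exercise this from the command line (a sketch: 'Alice'
> is a hypothetical VDI, and we assume collie's 'vdi write'/'vdi read'
> accepting a byte offset and length):
> 
>       $ echo -n hello | collie vdi write Alice 1 5
>       $ collie vdi read Alice 1 5
> 
>     Both the offset (1) and the length (5) are deliberately unaligned.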
> 
>   5 Use hash for vdi check and object recovery
> 
>     URL: http://lists.wpkg.org/pipermail/sheepdog/2013-May/009298.html
>     status: merged.
>     This improves vdi check and object recovery performance by simply
> comparing the hash digests of objects instead of their full contents.
> Check performance against large images is also dramatically improved
> because the check is now threaded.
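> 
>     For reference, the check that benefits is the usual one-liner
> (again with a hypothetical VDI 'Alice'):
> 
>       $ collie vdi check Alice
> 
>     The idea is that replicas can be compared by their digests, so
> whole objects no longer need to be read and transferred byte by byte.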
> 
> 
> *********************************************************************
> 
> Future Plans:
> 
>   1 It looks like QEMU auto-reconnect after a sheep restart is wanted
> by several users, so this will be at the top of our list.
>   2 Sheepdog is essentially an object store, on top of which we
> support volumes. So we'll definitely extend it to a greater audience:
> supporting the S3 or Swift API, to store users' objects of arbitrary
> length in a RESTful manner, is on the todo list.
>   3 Extend Sheepfs to support native POSIX file operations
> 
> Have fun with Sheepdog!

Great report, thanks a lot!

Kazutaka


