[sheepdog-users] [Sheepdog Announcement] Erasure coding is fully functional with Sheepdog now
Andrew J. Hobbs
ajhobbs at desu.edu
Tue Oct 22 16:00:19 CEST 2013
I love the work you've been doing on this, but I do have some questions
regarding erasure coding.
In your notes, you mention specifying a policy such as 4:2. How does this
map to physical nodes? Would that mean you could use 4:2 on 6 or more
nodes? How does that work exactly? And would a 16:15 configuration require
31 nodes or more?
If your node count exceeds the stripe/parity setting, does the data get
rotated across nodes so that all nodes contribute to throughput?
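To make the question concrete, here is the mapping I am imagining (purely my
assumption about how strips land on nodes, so please correct me if it works
differently):

$ dog vdi create -c 4:2 erasure 20G  # 4 data strips + 2 parity strips per object
# my guess: each strip lands on a distinct node/zone, so 4:2 would need at
# least 4 + 2 = 6 nodes, and 16:15 would need at least 16 + 15 = 31 nodes
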
On 10/22/2013 08:01 AM, Liu Yuan wrote:
> Hello all,
>
> Apologies if this mail sounds annoying to you.
>
> Sheepdog is a distributed object storage system for QEMU VMs and RESTful
> services (in progress).
>
> OpenStack users can make use of Sheepdog as Cinder and Glance storage as of now,
> and a Swift-compatible API is a work in progress.
>
> Yuan will introduce Sheepdog at the OpenStack Hong Kong summit, and the slides are
> already available at
> http://www.slideshare.net/multics/sheepdog-yet-another-all-inone-storage-for-openstack-27402520
>
> You can see more info about this talk at
> http://openstacksummitnovember2013.sched.org/event/706dc3952a8917aa74998e047d015e6a#.UmZnNYYW31E
>
> Erasure coding is now seamlessly functional alongside all other features such as
> snapshot/clone/cluster-wide snapshot/multi-disk/auto-healing etc., with the following
> characteristics:
>
> 1 Data is erasure coded automatically while being written to Sheepdog storage; no extra operations are needed.
> 2 Supports random read/write, in-place update, and misaligned read/write
> 3 Supports running any type of VM image, or attaching as a virtual disk of a VM
> 4 User-defined coding scheme on a per-VDI basis
> 5 Better read/write performance compared to the full replication scheme
> 6 A single cluster can support both replicated VDIs and erasure-coded VDIs (see the example below)
>
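> For example, replicated and erasure-coded VDIs can live side by side in one
> cluster, each with its own scheme chosen at creation time (8:4 below is just
> an illustrative scheme; see the wiki page for the exact schemes supported):
>
> $ dog vdi create -c 3 gold 20G
> $ dog vdi create -c 8:4 archive 20G
>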
> You can get more info at
>
> https://github.com/sheepdog/sheepdog/wiki for a general wiki
> https://github.com/sheepdog/sheepdog/wiki/Erasure-Code-Support for erasure coding
>
> Anyone who is interested can give it a try if you are comfortable with the command
> line:
>
> $ git clone https://github.com/sheepdog/sheepdog.git
>
> For a 10-minute quick start on a single machine, you can try the following
> (assuming you are on a Debian-based system):
>
> # Compile from source
>
> # build-essential, autotools and pkg-config are needed to build from a git
> # checkout; libfuse-dev enables the sheepfs step further below
> $ sudo apt-get install build-essential autoconf automake libtool pkg-config liburcu-dev libfuse-dev
> $ git clone git://github.com/sheepdog/sheepdog.git
> $ cd sheepdog
> $ ./autogen.sh; ./configure --disable-corosync
> $ make; sudo make install
>
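> # Optionally confirm that the binaries landed on your PATH (the exact paths
> # depend on your configure --prefix, /usr/local by default)
> $ which sheep dog sheepfs
>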
> # Create a 6-node cluster with the local cluster driver
> $ for i in `seq 0 5`; do sheep /tmp/store$i -n -c local -z $i -p 700$i; done
> $ dog cluster format
>
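> # Optionally, dog node list should show all 6 sheep before you create volumes
> $ dog node list
>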
> # Create a replicated thin-provisioned 20G volume with 3 copies
> $ dog vdi create -c 3 replica 20G
> $ dog vdi list # show vdi list
> $ dog node info # show node information
> $ dog cluster info # show cluster information
>
> # Create an erasure-coded (4 data strips and 2 parity strips) 20G volume
> $ dog vdi create -c 4:2 erasure 20G
>
> # Now you should have 2 VDIs created
> $ dog vdi list
>
> # You can install an OS on these volumes with upstream QEMU
> $ qemu-system-x86_64 -m 1024 --enable-kvm \
> -drive file=sheepdog:erasure,if=virtio -cdrom path_to_your_iso
>
> # or attach the volumes to an existing VM
> $ qemu-system-x86_64 -m 1024 --enable-kvm \
> -drive file=your_image,if=virtio -drive file=sheepdog:erasure,if=virtio
>
> # Take a live disk-only snapshot of a running VM
> $ dog vdi snapshot -s tag erasure
>
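> # The snapshot should show up in 'dog vdi list'; a writable clone can then be
> # created from it (clone syntax from memory, double-check the dog vdi clone help;
> # erasure_clone is just an example name)
> $ dog vdi clone -s tag erasure erasure_clone
>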
> # Mount the volume (VDI) on the local file system with sheepfs
> $ mkdir dir
> $ sheepfs dir
> $ echo erasure > dir/vdi/mount
> # then you can do whatever you like with the mounted file at dir/vdi/volume/erasure
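>
> # For example, the VDI should now appear as a regular file under dir/vdi/volume,
> # so standard tools can read or write it directly (a quick sanity check):
> $ ls -lh dir/vdi/volume/erasure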
>
> Have fun
>
> And feedback is always welcome.
>
> Thanks
> Yuan