[sheepdog-users] SIGABRT when doing: dog vdi check

Marcin Mirosław marcin at mejor.pl
Sat Jan 4 16:13:27 CET 2014


On 2014-01-04 06:28, Liu Yuan wrote:
> On Fri, Jan 03, 2014 at 10:51:26PM +0100, Marcin Mirosław wrote:
>> Hi!

Hi all!

>> I'm new to "sheep-running" ;) I'm just starting to try sheepdog, so
>> I'm probably doing many things wrong. I'm playing with sheepdog-0.7.6.
>> 
>> First problem (SIGABRT): I started multiple sheep daemons on
>> localhost:
>> # for x in 0 1 2 3 4; do sheep -c local -j size=128M -p 700$x /mnt/sheep/metadata/$x,/mnt/sheep/storage/$x; done
>> 
>> Next:
>> # dog cluster info
>> Cluster status: Waiting for cluster to be formatted
>> 
>> # dog cluster format -c 2:1
> 
> 0.7.6 doesn't support erasure coding. Try the latest master branch.

Now I'm on 486ace8ccbb [master]. How should I check the chosen redundancy?
 # cat /mnt/test/vdi/list
   Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
   testowy      0  1.0 GB  0.0 MB  0.0 MB 2014-01-04 15:07   cac836     3

Here I can see 3 copies, but no information about how many parity strips
are configured. Probably this isn't implemented yet?
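In case it helps other testers, here is what I would try to inspect the
redundancy from the cluster side rather than through sheepfs. This is only a
sketch; the `-v` flag and the exact output are assumptions on my part, so
check them against your build:

```shell
# Hypothetical check -- flag names and output format are assumptions,
# not verified against this particular sheepdog build.

# Cluster-wide info; newer builds may print the format/redundancy policy:
dog cluster info -v

# Per-VDI redundancy; for an erasure-coded VDI the Copies column might
# show the data:parity scheme (e.g. "2:1") instead of a plain count:
dog vdi list
```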

>> # dog vdi create testowy 5G
>> # gdb -q dog
>> Reading symbols from /usr/sbin/dog...Reading symbols from
>> /usr/lib64/debug/usr/sbin/dog.debug...done.
>> done.
>> (gdb) set args vdi check testowy
>> (gdb) run
>> Starting program: /usr/sbin/dog vdi check testowy
>> warning: Could not load shared library symbols for linux-vdso.so.1.
>> Do you need "set solib-search-path" or "set sysroot"?
>> warning: File "/lib64/libthread_db-1.0.so" auto-loading has been
>> declined by your `auto-load safe-path' set to
>> "$debugdir:$datadir/auto-load".
>> To enable execution of this file add
>>     add-auto-load-safe-path /lib64/libthread_db-1.0.so
>> line to your configuration file "/root/.gdbinit".
>> To completely disable this security protection add
>>     set auto-load safe-path /
>> line to your configuration file "/root/.gdbinit".
>> For more information about this security protection see the
>> "Auto-loading safe path" section in the GDB manual.  E.g., run from
>> the shell: info "(gdb)Auto-loading safe path"
>> warning: Unable to find libthread_db matching inferior's thread
>> library, thread debugging will not be available.
>> PANIC: can't find next new idx
> 
> Seems that the 0.7.x series is flaky here. Hitoshi, can you verify
> this?

Now I'm getting:
# dog  cluster check
fix vdi testowy
PANIC: can't find a valid vnode
dog exits unexpectedly (Aborted).


Hmm, it seems master is broken at the moment:
# du -h /mnt/sheep/storage/*
4,0K    /mnt/sheep/storage/0
4,0K    /mnt/sheep/storage/1
4,0K    /mnt/sheep/storage/2
4,0K    /mnt/sheep/storage/3
4,0K    /mnt/sheep/storage/4
4,0K    /mnt/sheep/storage/5


>> I'd like to ask you for advice about the proper (for my purposes)
>> configuration of a sheep "cluster". I'd like to build a one-node
>> storage system for keeping backups. I'm going to use a few HDDs
>> (from 2 to 5 units), so I think I need "Multi disk on Single Node
>> Support". I'd like to have enough redundancy to survive one HDD
>> failure (I'm thinking about using "Erasure Code Support" with 2:1
>> or 4:1 redundancy). I'd also like the flexibility of adding or
>> removing HDDs from sheepdog's cluster (though I suspect that kind
>> of flexibility isn't possible). After reading the wiki I think
>> almost everything above is possible, am I right?
> 
> Yes, Sheepdog's Multi-Disk feature supports hotplug and hot-unplug of
> any number of disks on any node. A node here means a host that has
> one or more disks.
> 
>> Should I use one daemon per node, or multiple sheep daemons on one
>> node? (I think one daemon is enough, but the wiki says: "You need
>> at least X alive nodes (e.g., 4 nodes in a 4:2 scheme) to serve
>> read/write requests. If the number of nodes drops below X, the
>> cluster will deny service. Note that if you only have X nodes in
>> the cluster, it means you don't have any redundancy parity
>> generated." So I'm not sure whether I should configure one daemon
>> or several.)
> 
> If you have only one storage host but want sheepdog to manage it
> like RAID 5, then you need a 1:1 mapping, that is, one daemon per
> disk. This means you actually run N nodes on the same host. For
> example, if you have 5 disks with 5 daemons set up, you can format
> as 4:1, and the cluster (all nodes happen to run on the same host)
> will manage all 5 disks exactly like RAID 5.
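For the record, the single-host RAID-5-like setup described above could be
sketched like this. The paths and port numbers are my own example values,
not a recommendation:

```shell
# One sheep daemon per disk on the same host (5 disks -> 5 "nodes").
# Paths and ports below are hypothetical examples.
for x in 0 1 2 3 4; do
    sheep -c local -p 700$x /mnt/sheep/meta/$x,/mnt/sheep/disk/$x
done

# Format with erasure coding: 4 data strips + 1 parity strip, so the
# cluster tolerates the loss of any one disk, like RAID 5.
dog cluster format -c 4:1
```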
> 
>> Last question is about checksumming of data. Is it better to lay
>> sheep on ext4 and use btrfs/zfs on the VDI or lay sheep on btrfs
>> and use ext4 on top of VDI?
>> 
> 
> Either is okay, since sheepdog provides a block device abstraction
> and doesn't care how you use it. For the filesystem you lay sheep
> on, ext4 or xfs is suggested, since we only expect POSIX xattr
> support from the underlying filesystem.

> By the way, if you are only interested in block device (not for
> VM), you can take a look at iSCSI
> (https://github.com/sheepdog/sheepdog/wiki/General-protocol-support-%28iSCSI-and-NBD%29#iscsi)
>
>  which will probably outperform sheepfs, because sheepfs is based on
> FUSE and performance is heavily constrained by it.
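For whoever tries the iSCSI route later: the wiki page above describes
exporting a VDI through tgt. A rough sketch of what I expect that to look
like follows; the target name and the exact tgtadm options are from memory,
so treat them as assumptions and verify against the wiki:

```shell
# Export the VDI "testowy" over iSCSI via tgt (sketch; verify the
# exact options against the sheepdog wiki page linked above).
tgtadm --op new --mode target --tid 1 --lld iscsi \
       --targetname iqn.2014-01.pl.mejor:testowy
tgtadm --op new --mode logicalunit --tid 1 --lun 1 --lld iscsi \
       --backing-store testowy --bstype sheepdog
tgtadm --op bind --mode target --tid 1 --lld iscsi -I ALL
```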


Thank you for all the advice. It's easier to start testing with FUSE,
but in the future I'll try iSCSI (and compare performance, just out of
curiosity :)).
I'm sorry for my brief answers; they're short because I don't want to
hurt the English language too often ;)

Marcin



