[sheepdog-users] Some question from new user

Cristian Del Carlo cristian.delcarlo at targetsolutions.it
Fri Nov 17 09:22:24 CET 2017


Hi,

I use Sheepdog in production in a small cluster of 3 nodes, and I am
planning to use it in another cluster of 4 nodes.

As we can read on page 28 of the following document: "Sheepdog satisfies
all features for block storage and is ready for commercial use from
v0.9.1 or later".

http://events.linuxfoundation.jp/sites/events/files/slides/COJ2015_Sheepdog_20150604.pdf

In my limited experience, I recommend using v1.0.1 with ZooKeeper.
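To make the recommendation concrete, here is a rough sketch of a ZooKeeper-backed setup (the hostnames, port and paths are placeholders, and the flags assume the stock sheep/dog tools; adjust to your own environment):

```shell
# Start the sheep daemon on each node, pointing it at the ZooKeeper
# ensemble (placeholder addresses).
sheep -c zookeeper:zk1:2181,zk2:2181,zk3:2181 /var/lib/sheepdog

# Format the cluster once, from any node, keeping 3 copies of each object.
dog cluster format -c 3

# Verify that all nodes have joined.
dog node list
```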

Why, then, are there these doubts about using it in production?

Thanks,


2017-11-17 8:04 GMT+01:00 Vasiliy Tolstov <v.tolstov at selfip.ru>:

> 2017-11-16 16:00 GMT+03:00 Raymond Burkholder <ray at oneunified.net>:
> > If you don't mind some questions:
> >
> > On 11/16/2017 03:42 AM, Vasiliy Tolstov wrote:
> >>
> >> 2017-11-15 22:49 GMT+03:00 Gandalf Corvotempesta
> >> <gandalf.corvotempesta at gmail.com>:
> >>>
> >>> 1) is sheepdog stable and ready for production use?
> >>
> >>
> >> It has rough edges, mainly with cluster recovery when nodes go up
> >> or down.
> >
> >
> > Even with the rough edges, do you think it is good enough to run? Are
> > you still using it?
>
> I'm not using it. I'm testing sheepdog, ceph and scst. When I
> power-cycle a sheepdog node (1 node with 64 disks), sheepdog starts and
> writes to the error log:
> do_epoch_log_read(178) invalid epoch 1 log. Also, my tests (I use an
> infiniband interconnect) show that the scst-based solution is much
> faster than sheepdog or ceph.
>
>
> >
> >>
> >>> 2) do I need any metadata servers (like Ceph), or are the storage
> >>> servers the only ones needed (like Gluster)?
> >>
> >>
> >> No metadata servers are needed, but you do need a cluster manager
> >> (zookeeper, corosync). I tried corosync and it is not very stable
> >> (not sheepdog, but corosync itself). Sometimes I got a split brain
> >> where the cluster divided into parts, and only restarting corosync
> >> solved the issue. (I tried corosync 2.x.)
> >
> >
> > Do you lose data when you encounter these situations or are you able to
> > recover successfully and return to normal operation?
> >
>
>
> Data is safe in this case, but you need manual steps to fix up the
> cluster.
>
>
> >>
> >>> 3) can I use multiple disks in a server, or should each server
> >>> "expose" a single filesystem (thus requiring RAID)?
> >>
> >>
> >> Sheepdog supports multi-disk, but you need a metastore to save epoch
> >> and config data, so RAID is preferable.
> >
> >
> > So, just to confirm: you recommend a RAID system of some sort on each
> > host, in addition to the replication across hosts which Sheepdog
> > supplies? Is there some reference information I could read about the
> > metastore, epoch, and config data?
> >
>
> https://github.com/sheepdog/sheepdog/wiki/Multi-disk-on-Single-Node-Support
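If I read that wiki page correctly, the multi-disk layout boils down to passing the metastore path first, followed by the data disks, as a comma-separated list. A sketch (the paths and ZooKeeper addresses below are placeholders):

```shell
# The first path holds the epoch/config metastore; the rest are data
# disks managed by sheepdog's multi-disk support.
sheep /var/lib/sheepdog/meta,/mnt/disk1,/mnt/disk2,/mnt/disk3 \
      -c zookeeper:zk1:2181,zk2:2181,zk3:2181
```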
>
> >>
> >>> 7) any "scrub" feature to be sure that all objects are replicated
> >>> properly and not subject to "bit-rot"?
> >>
> >>
> >> No, but sheepdog has a cluster check (which in my case sometimes
> >> reports errors)
> >
> >
> > What sort of errors do you see? Is there something to fix, or do you
> > ignore them?
> >
> >>
>
> I don't know whether you can ignore it or not, but the check sometimes
> reports that some objects are missing.
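The check being discussed is presumably the one exposed by the dog CLI; a sketch of how one would run it (assuming the stock dog tool, with "vm01" as a hypothetical VDI name):

```shell
# Walk all objects in the cluster and report replication inconsistencies.
dog cluster check

# Or check (and repair) a single VDI, e.g. a hypothetical image "vm01".
dog vdi check vm01
```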
>
>
> --
> Vasiliy Tolstov,
> e-mail: v.tolstov at selfip.ru
> --
> sheepdog-users mailing lists
> sheepdog-users at lists.wpkg.org
> https://lists.wpkg.org/mailman/listinfo/sheepdog-users
>



Cristian