[sheepdog-users] runtime requirements?

Miles Fidelman mfidelman at meetinghouse.net
Thu Mar 13 20:33:43 CET 2014


Andrew, Thanks for the info!

One follow-up, if I might: when you say "I'm running qemu 1.7 on all 
nodes" - does that include the storage-only machines, or just the 
machines hosting VMs?

Miles

Andrew J. Hobbs wrote:
> Should have read further.  :)
>
> Our production system consists of a mish-mash of machines.  Four
> workstations that were retired from normal use, and two actual servers.
> I have two additional rackmount units that will be added soon.  All VMs
> run on the rackmount servers, while the workstations provide additional
> drive space for sheepdog. Technically, two of those could host VMs but
> as they don't have that much memory, I leave them as storage nodes.  We
> have roughly 16TB of storage altogether running on 0.7.7 (although I
> will be upgrading to 0.8.0 today if cluster snapshot works for me), with
> 11 server VMs virtualized on the cluster.  1.7TB actually in use. VMs
> range from an internal only NFS home directory server for our Linux
> labs, to two external webservers: one running drupal for our website,
> the other running moodle for class interaction with students.  VPN
> services, puppet configuration server, etc also run on the cluster.
>
> I have one node that is iffy and will die every few weeks.  It's not a
> sheepdog issue, but rather a Dell PERC i710 issue on that server.
> Average time to rebalance 1.7TB when it fails?  Well, it happened today
> at 11am and finished by 12:45.  While I'm using this as an opportunity
> to do
> an upgrade, I could have restarted sheepdog and the images should have
> simply resumed.
>
> I'm running qemu 1.7 on all nodes, and we use kvm as the primary
> technology rather than xen.  Nodes are running Ubuntu 13.10, with
> hugepages enabled.  I leave 4G for the host, the rest is allocated to
> hugepages.  I originally had nodes running btrfs, until I had a hard
> failure of btrfs on the iffy node.  I've since moved all nodes to ext4.
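>
> Roughly, that split looks like the following (illustrative numbers,
> assuming 2M hugepages on a 64G box - not my exact values):
>
> ```shell
> # Reserve all but ~4G as 2M hugepages: (64G - 4G) / 2M = 30720 pages
> echo 30720 > /proc/sys/vm/nr_hugepages
>
> # Mount hugetlbfs so qemu can back guest memory with it
> mkdir -p /dev/hugepages
> mount -t hugetlbfs hugetlbfs /dev/hugepages
>
> # qemu then takes the backing path:
> #   qemu-system-x86_64 -mem-path /dev/hugepages ...
> ```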
>
> All nodes are interconnected via gigabit ethernet.  The four
> workstations have a single interface, which I bind to Open vSwitch and
> create tagged interfaces on.  Sheepdog runs on its own vlan with no
> external access.  VMs then bridge onto ports they need access to,
> whether it's internal only, external only, or a guest only network.  The
> servers have four discrete nics, each of which is on a separate vlan.
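>
> The Open vSwitch side of a single-NIC workstation is roughly this
> (bridge, port, VLAN tag, and address here are illustrative, not my
> actual config):
>
> ```shell
> # One bridge holding the physical NIC
> ovs-vsctl add-br br0
> ovs-vsctl add-port br0 eth0
>
> # Internal port tagged onto the sheepdog-only VLAN
> ovs-vsctl add-port br0 sheep0 tag=10 -- set interface sheep0 type=internal
> ip addr add 10.0.10.2/24 dev sheep0
> ip link set sheep0 up
> ```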
>
> I've been very happy with the solution, as have the faculty here. Plans
> are to extend the cluster in the future.
>
> On 03/13/2014 01:16 PM, Miles Fidelman wrote:
>> Andrew: What's your environment look like (what's Sheepdog running on,
>> what kinds of virtualization environment, what kinds of VMs?).
>>
>> Thanks!
>>
>> Miles
>>
>>
>> Andrew J. Hobbs wrote:
>>> While I haven't personally had to use them in this case (yet), I'd
>>> try the iSCSI or NBD options for sheepdog.  I routinely run
>>> benchmarks when trying new combinations; if you do for both of these
>>> setups, I'd be very interested in your results.
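>>>
>>> For NBD, it can be as simple as exporting the vdi through qemu-nbd
>>> (the vdi name here is made up):
>>>
>>> ```shell
>>> # Attach a sheepdog vdi as a local block device
>>> modprobe nbd
>>> qemu-nbd -c /dev/nbd0 sheepdog:myvdi
>>> # ...use /dev/nbd0 like any disk, then detach:
>>> qemu-nbd -d /dev/nbd0
>>> ```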
>>>
>>> I keep a Linux image with a fully allocated disk and run benchmarks
>>> on it with the command below.
>>>
>>>
>>> dd if=/dev/zero of=/test.out bs=4k count=1000000 oflag=direct
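>>>
>>> For scale, that's roughly 3.8 GiB of O_DIRECT 4k writes per run:
>>>
>>> ```shell
>>> # 4096 bytes/block * 1,000,000 blocks, expressed in MiB
>>> echo $((4096 * 1000000 / 1048576))   # prints 3906
>>> ```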
>>>
>>>
>>>
>>> On 03/13/2014 10:58 AM, Miles Fidelman wrote:
>>>
>>> Since I'm running Xen paravirtualized (for speed in all cases, and,
>>> for 2 servers, because they don't have hardware virtualization
>>> available), QEMU-based drivers aren't available - hence my reasons
>>> for not being able to use Sheepdog in the past.
>>>
>>> iSCSI solves the problem.  So would NFS.
>>>
>>> Thanks!
>>>
>>> Miles
>>>
>>>
>>
>
>


-- 
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra



