[sheepdog] [PATCH 2/9] Doc. "Sheepdog Advanced" add chapter "multidevice"

Valerio Pachera sirio81 at gmail.com
Fri Oct 11 17:53:11 CEST 2013


Signed-off-by: Valerio Pachera <sirio81 at gmail.com>
---
 doc/multidevice.rst |  167 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 167 insertions(+)
 create mode 100644 doc/multidevice.rst

diff --git a/doc/multidevice.rst b/doc/multidevice.rst
new file mode 100644
index 0000000..690c02b
--- /dev/null
+++ b/doc/multidevice.rst
@@ -0,0 +1,167 @@
+Multidevice
+===============
+
+In the Basic section we considered only the case of using a single disk per host.
+Sheepdog is able to manage **several disks with a single daemon**.
+In previous sheepdog versions, the only way to use more disks per host was to
+run more sheep daemons.
+Since v0.6, Multi-Device support has been introduced and a single daemon is able
+to manage as many devices as you wish.
+It also avoids generating network traffic when removing a disk.
+
+
+
+Plug a device
+**************
+
+Scenario: you started your cluster on a single device per host, and now you wish
+to expand it by adding a second disk.
+
+E.g., a host with 3 disks:
+
+::
+
+    sda (used by Linux)
+    sdb1 (already used by sheepdog)
+    sdc1 (the new device to add)
+
+Before adding the second disk
+
+::
+
+    # dog node md info
+    Id      Size    Used    Avail   Use%    Path
+    Node 0:
+    0      2.0 GB  1.6 GB  419 MB   79%    /mnt/sheep/dsk01/obj
+
+
+Add the second disk to the cluster
+
+::
+
+    # dog node md plug /mnt/sheep/dsk02
+    
+After adding the second disk
+
+::
+    
+    # dog node md info
+    Id      Size    Used    Avail   Use%    Path
+    Node 0:
+    0      2.0 GB  1.6 GB  419 MB   79%    /mnt/sheep/dsk01/obj
+    1      482 MB  424 MB  58 MB    87%    /mnt/sheep/dsk02
+    
+Do not mind the sizes; they will not match yours.
+
+We now need to make some considerations:
+
+1. The first disk is using the sub-folder *obj*; the second doesn't.
+2. A recovery is started, but only on that node.
+3. Plugging does not change the weight of the node.
+
+1. When using a single disk (sheep /mnt/sheep/dsk01), some folders and files are
+   created automatically (cache  config  epoch  lock  obj  sheep.log  sock).
+   The data part (the objects) is stored in the sub-folder '*obj*'.
+   We consider the other files and directories as 'meta data'.
+   Only one meta data directory is required per daemon, so the folder/device we
+   added is going to store only objects.
+
+2. Run *'dog node recovery'* and you'll notice that only the node you added the
+   disk to is recovering.
+   When you add a new host, all the nodes of the cluster receive data
+   (see next point).
+   The node is now balancing the objects among its own disks only.
+
+3. **IMPORTANT:** You can see that with *'dog node list'*. This is done on
+   purpose to avoid triggering a cluster-wide recovery when plugging/unplugging
+   a disk.
+   **This also means that your cluster capacity is not yet increased!**
+   So run *'dog cluster reweight'* when your cluster is less loaded, to avoid
+   lowering performance due to the recovery.
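+
+As a sketch, the complete workflow after plugging a disk might look like this
+(run the reweight during a low-load period):
+
+::
+
+    # check that the node weight (v-nodes) is unchanged
+    dog node list
+    # later, when the cluster is less loaded, update the weights
+    dog cluster reweight
+    # watch the resulting cluster-wide recovery
+    dog node recovery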
+
+Start the cluster with multiple devices
+***************************************
+
+If you stop the cluster now, you can't run the daemon as before
+(sheep /mnt/sheep/dsk01)!
+You have to use a different syntax:
+
+::
+
+    sheep /mnt/sheep/dsk01,/mnt/sheep/dsk01/obj,/mnt/sheep/dsk02
+    
+**IMPORTANT: the first directory has to be the meta data directory!**
+Using this syntax is like saying to sheepdog: "use /mnt/sheep/dsk01 as the meta
+data directory, then use /mnt/sheep/dsk01/obj and /mnt/sheep/dsk02 to store
+objects".
+As you may notice, it doesn't matter where we store the objects, and that's
+going to become even clearer in the next chapters.
+
+Unplug a device
+***************
+
+If we notice that a disk is not performing well and we wish to replace it,
+we simply have to run
+
+::
+
+    dog node md unplug /mnt/sheep/dsk02
+
+We can now safely unmount the device, replace the physical disk, format the new
+one and mount it.
+Now we can plug it back the same way we plugged it the first time.
+
+**BEWARE**: in this example you can't remove the first disk because it contains
+the meta data for all the other disks!
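+
+As a sketch, the whole replacement might look like this (the device name and
+filesystem are assumptions; adapt them to your setup):
+
+::
+
+    dog node md unplug /mnt/sheep/dsk02
+    umount /mnt/sheep/dsk02
+    # physically replace the disk, then:
+    mkfs.ext4 /dev/sdc1
+    mount /dev/sdc1 /mnt/sheep/dsk02
+    dog node md plug /mnt/sheep/dsk02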
+
+
+
+Separate meta data folder
+*************************
+
+You don't really need to do this, but it can be a good idea for a few reasons.
+As described in the previous chapter, we can't remove the disk containing the
+meta data dir.
+It's better to place that directory on the same physical device where the
+operating system is.
+
+Another reason may be to use a faster device for caching.
+The meta data directory also contains the object cache, if enabled.
+If you store it on an SSD, you'll benefit from its performance.
+
+Let's say our Linux distro is installed on the SSD.
+sda1 is used by Linux; sda3 may be used for caching
+(mount point /mnt/sheep/meta).
+
+::
+
+    sheep -w size 20000 /mnt/sheep/meta,/mnt/sheep/dsk01,/mnt/sheep/dsk02
+    
+**IMPORTANT**: Following the previous example, **before** running sheep with
+these options we have to move a few things (the meta data).
+Stop all the guests and the cluster, then:
+
+::
+
+    cd /mnt/sheep/dsk01
+    mv cache config epoch lock sheep.log sock /mnt/sheep/meta
+    mv obj/* .
+    rmdir obj
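+
+Before restarting the daemon, it may be worth double-checking the new layout
+(paths follow the example above):
+
+::
+
+    ls /mnt/sheep/meta   # should list: cache config epoch lock sheep.log sock
+    ls /mnt/sheep/dsk01  # should list the objects formerly in obj/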
+    
+Monitor the device usage
+************************
+
+::
+
+    dog node md info --all
+    
+It gives you the information for all the hosts with a single command.
+
+Multiple devices on a new host
+******************************
+
+If you have a new host, you can start sheep using all 3 devices right away.
+You **don't** need to start on a single device first and plug the others later.
+
+::
+
+    sheep -w size 20000 /mnt/sheep/meta,/mnt/sheep/dsk01,/mnt/sheep/dsk02
\ No newline at end of file
-- 
1.7.10.4
