[sheepdog] [PATCH 1/11] sheep: Documentation, section "Sheepdog Basic", add chapter "concepts"

Valerio Pachera sirio81 at gmail.com
Wed Oct 9 17:11:19 CEST 2013


Signed-off-by: Valerio Pachera <sirio81 at gmail.com>
---
 doc/concepts.rst |   53 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)
 create mode 100644 doc/concepts.rst

diff --git a/doc/concepts.rst b/doc/concepts.rst
new file mode 100644
index 0000000..b5f16d5
--- /dev/null
+++ b/doc/concepts.rst
@@ -0,0 +1,53 @@
+Concepts
+========
+
+To put it simply, Sheepdog **splits** the guests' disks into pieces
+(chunks) and spreads them over all the hosts of the cluster.
+If we imagine the hosts as disks, the result looks similar to a RAID 0.
+As in a RAID 0, the loss of a single host would then mean the loss of
+all data, but Sheepdog **keeps multiple copies** of each chunk (three
+by default), so that no matter which host you stop, there are always
+enough chunks left to form the virtual disks.
+In the Sheepdog context, chunks are called "objects".
+When a host goes down, some objects are left with "only" two copies
+instead of three.
+The cluster re-replicates the missing copies on the remaining active
+nodes as soon as possible.
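+
+The number of copies is set when the cluster is formatted. As a rough
+sketch, assuming the cluster management client (called *collie* in
+older releases, *dog* in newer ones) is available::
+
+    $ collie cluster format --copies 3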
+
+Each node needs two running programs:
+
+- corosync (or another cluster manager)
+- sheep
+
+Corosync notices when any host joins or leaves the cluster.
+It has to be started before *sheep*.
+Sheep manages the storage, i.e. it writes objects to disk.
+If a host loses contact with the other nodes (e.g. a network cable is
+disconnected), its *sheep* process dies.
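+
+As a minimal sketch of the startup order (assuming corosync runs as a
+system service and */var/lib/sheepdog* is the storage directory
+discussed below; the exact commands depend on your distribution)::
+
+    $ service corosync start
+    $ sheep /var/lib/sheepdog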
+
+The only mandatory argument to the *sheep* command is a directory path.
+The directory has to be a mount point for a dedicated device
+(partition, logical volume, etc.).
+Make sure that no other program writes any data into that directory;
+in particular, it has to be empty the first time you run *sheep*.
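+
+For example, to dedicate a partition to Sheepdog (the device name,
+filesystem and path below are only assumptions; adapt them to your
+setup)::
+
+    $ mkfs.xfs /dev/sdb1
+    $ mount /dev/sdb1 /var/lib/sheepdog
+    $ sheep /var/lib/sheepdog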
-- 
1.7.10.4