2013/12/1 Liu Yuan <namei.unix@gmail.com>:
> On Sun, Dec 01, 2013 at 03:48:04PM +0800, Robin Dong wrote:
> > Implement the distributed lock by zookeeper (refer:
> > http://zookeeper.apache.org/doc/trunk/recipes.html).
> > The routine is:
> > 1. Create a seq-ephemeral znode in the lock directory (use the lock-id
> >    as the directory name).
> > 2. Take the smallest znode path as the owner of the lock; the other
> >    threads wait on a pthread_mutex_t (cluster_lock->wait).
> > 3. When the owner releases the lock (or is killed by accident),
> >    zookeeper triggers zk_watch(), which wakes up all waiting threads to
> >    compete for new ownership of the lock.
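
(To make the routine concrete, here is a minimal sketch against the
ZooKeeper C API. The znode paths and all names are illustrative assumptions,
and the polling retry stands in for the patch's pthread_mutex_t wait/wakeup:)

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <zookeeper/zookeeper.h>

/* illustrative only -- not the patch's code */
static int sketch_lock(zhandle_t *zh, uint64_t lock_id)
{
	char dir[256], prefix[256], me[256], lowest[256];
	struct String_vector kids;
	int rc, i;

	snprintf(dir, sizeof(dir), "/sheepdog/lock/%" PRIu64, lock_id);
	snprintf(prefix, sizeof(prefix), "%s/lock_", dir);

	/* 1. create a seq-ephemeral znode in the lock directory */
	rc = zoo_create(zh, prefix, "", 0, &ZOO_OPEN_ACL_UNSAFE,
			ZOO_SEQUENCE | ZOO_EPHEMERAL, me, sizeof(me));
	if (rc != ZOK)
		return rc;

	for (;;) {
		/* 2. the smallest znode in the directory owns the lock */
		rc = zoo_get_children(zh, dir, 1, &kids);
		if (rc != ZOK)
			return rc;
		strcpy(lowest, kids.data[0]);
		for (i = 1; i < kids.count; i++)
			if (strcmp(kids.data[i], lowest) < 0)
				strcpy(lowest, kids.data[i]);
		deallocate_String_vector(&kids);
		if (strstr(me, lowest))
			return ZOK;	/* we are the owner */
		/* 3. the patch blocks on cluster_lock->wait here and is
		 * woken by zk_watch() on ZOO_DELETED_EVENT; this sketch
		 * just polls to stay self-contained */
		sleep(1);
	}
}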
> >
> > We use dlock_array to store pointers to the cluster_locks in this sheep
> > daemon, so when the ZOO_DELETED_EVENT arrives the program can wake up
> > all waiters (in this sheep daemon) that are sleeping on the lock id and
> > let them compete for new ownership.
> > dlock_array is just a normal array using the lock-id as index, so
> > imagine this scenario: two threads (A and B) in one sheep daemon call
> > zk_lock() for the same lock-id. They will create two znodes in
> > zookeeper, but dlock_array[lock_id] points to only one of them (say, B).
> > When the ZOO_DELETED_EVENT comes, zk_watcher() will only wake up thread
> > B, and thread A will sleep on '->wait' forever because no one can wake
> > it up.
> > We have two methods to solve this problem:
> > A. Use a more complicated structure instead of dlock_array to store both
> >    A's and B's lock handles.
> > B. Add a lock to prevent A and B from calling zk_lock() at the same
> >    time.
> > We prefer method B because it also avoids creating too many files in a
> > zookeeper lock directory, which would put too much pressure on the
> > zookeeper server if the number of sheep daemons is huge. Therefore we
> > add 'local_lock' to 'struct cluster_lock'.
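
(Method B in code, roughly -- the three znode helpers below are hypothetical
placeholders for the zookeeper calls; only 'local_lock' and 'wait' are real
fields from the patch:)

static void sketch_zk_lock(struct cluster_lock *cluster_lock)
{
	/* method B: threads of this daemon queue here first, so at most
	 * one znode per daemon exists under the lock directory */
	pthread_mutex_lock(&cluster_lock->local_lock);

	create_seq_ephemeral_znode(cluster_lock);	/* hypothetical */
	while (!owns_smallest_znode(cluster_lock))	/* hypothetical */
		/* zk_watcher() unlocks ->wait on ZOO_DELETED_EVENT */
		pthread_mutex_lock(&cluster_lock->wait);
}

static void sketch_zk_unlock(struct cluster_lock *cluster_lock)
{
	delete_own_znode(cluster_lock);			/* hypothetical */
	pthread_mutex_unlock(&cluster_lock->local_lock);
}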
> >
> > Signed-off-by: Robin Dong <sanbai@taobao.com>
> > ---
> >  include/sheep.h           |    8 +++
> >  sheep/cluster.h           |   34 +++++++++++
> >  sheep/cluster/corosync.c  |   16 +++++
> >  sheep/cluster/local.c     |   16 +++++
> >  sheep/cluster/zookeeper.c |  140 ++++++++++++++++++++++++++++++++++++++++++++-
> >  5 files changed, 213 insertions(+), 1 deletions(-)
> >
> > diff --git a/include/sheep.h b/include/sheep.h
> > index 293e057..fd7258b 100644
> > --- a/include/sheep.h
> > +++ b/include/sheep.h
> > @@ -255,6 +255,14 @@ static inline void nodes_to_buffer(struct rb_root *nroot, void *buffer)
> >
> >  #define MAX_NODE_STR_LEN 256
> >
> > +/* structure for the distributed lock */
> > +struct cluster_lock {
> > +	uint64_t id;		/* id of this mutex */
> > +	pthread_mutex_t wait;		/* waiters sleep on this mutex */
> > +	pthread_mutex_t local_lock;	/* serializes zk_lock() callers in one daemon */
> > +	char ephemeral_path[MAX_NODE_STR_LEN];	/* path of our znode */
> > +};
> > +
> >  static inline const char *node_to_str(const struct sd_node *id)
> >  {
> >  	static __thread char str[MAX_NODE_STR_LEN];
> > diff --git a/sheep/cluster.h b/sheep/cluster.h
> > index 81b5ae4..f0950ac 100644
> > --- a/sheep/cluster.h
> > +++ b/sheep/cluster.h
> > @@ -109,6 +109,40 @@ struct cluster_driver {
> >  	int (*unblock)(void *msg, size_t msg_len);
> >
> >  	/*
> > +	 * Init a distributed mutually exclusive lock to avoid races
> > +	 * when the whole sheepdog cluster processes one exclusive resource.
> > +	 *
> > +	 * This function uses 'lock_id' as the id of this distributed lock.
> > +	 * A thread can create many locks in one sheep daemon.
> > +	 *
> > +	 * Returns SD_RES_XXX
> > +	 */
> > +	int (*init_lock)(struct cluster_lock *lock, uint64_t lock_id);
> > +
> > +	/*
> > +	 * Acquire the distributed lock.
> > +	 *
> > +	 * The cluster_lock referenced by 'lock' shall be locked by calling
> > +	 * cluster->lock(). If the cluster_lock is already locked, the calling
> > +	 * thread shall block until the cluster_lock becomes available.
> > +	 *
> > +	 * This operation will create a seq-ephemeral znode in the lock
> > +	 * directory of zookeeper (use lock-id as dir name). The smallest
> > +	 * file path in this directory will be the owner of the lock; the
> > +	 * other threads will wait on a pthread_mutex_t (cluster_lock->wait)
> > +	 */

> The second paragraph should be moved to zookeeper.c, since it is only
> zk-related.

> > +	void (*lock)(struct cluster_lock *lock);
> > +
> > +	/*
> > +	 * Release the distributed lock.
> > +	 *
> > +	 * If the owner of the cluster_lock releases it (or the owner is
> > +	 * killed by accident), zookeeper will trigger zk_watch(), which
> > +	 * will wake up all waiting threads to compete for new ownership
> > +	 */
> > +	void (*unlock)(struct cluster_lock *lock);
> > +
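
(From a caller's point of view the three new ops combine like this -- a
sketch; the wrapper function and the vid-as-lock-id choice are assumptions,
only init_lock/lock/unlock come from this patch:)

static int sketch_exclusive_op(struct cluster_driver *cdrv, uint64_t vid)
{
	struct cluster_lock vdi_lock;
	int ret;

	ret = cdrv->init_lock(&vdi_lock, vid);
	if (ret != SD_RES_SUCCESS)
		return ret;

	cdrv->lock(&vdi_lock);	/* blocks until this thread owns the lock */
	/* ... touch the resource shared across the whole cluster ... */
	cdrv->unlock(&vdi_lock);

	return SD_RES_SUCCESS;
}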
> > +	/*
> >  	 * Update the specific node in the driver's private copy of nodes
> >  	 *
> >  	 * Returns SD_RES_XXX
> > diff --git a/sheep/cluster/corosync.c b/sheep/cluster/corosync.c
> > index ea4421b..35c95d3 100644
> > --- a/sheep/cluster/corosync.c
> > +++ b/sheep/cluster/corosync.c
> > @@ -774,6 +774,19 @@ again:
> >  	return 0;
> >  }
> >
> > +static int corosync_init_lock(struct cluster_lock *dlock, uint64_t id)
> > +{
> > +	return -1;
> > +}
> > +
> > +static void corosync_lock(struct cluster_lock *dlock)
> > +{
> > +}
> > +
> > +static void corosync_unlock(struct cluster_lock *dlock)
> > +{
> > +}
> > +
> >  static int corosync_update_node(struct sd_node *node)
> >  {
> >  	struct cpg_node cnode = this_node;
> > @@ -794,6 +807,9 @@ static struct cluster_driver cdrv_corosync = {
> >  	.notify = corosync_notify,
> >  	.block = corosync_block,
> >  	.unblock = corosync_unblock,
> > +	.init_lock = corosync_init_lock,
> > +	.lock = corosync_lock,
> > +	.unlock = corosync_unlock,
> >  	.update_node = corosync_update_node,
> >  };
> >
> > diff --git a/sheep/cluster/local.c b/sheep/cluster/local.c
> > index b8cbb5c..69d9d6f 100644
> > --- a/sheep/cluster/local.c
> > +++ b/sheep/cluster/local.c
> > @@ -547,6 +547,19 @@ static int local_init(const char *option)
> >  	return 0;
> >  }
> >
> > +static int local_init_lock(struct cluster_lock *dlock, uint64_t id)
> > +{
> > +	return -1;
> > +}
> > +
> > +static void local_lock(struct cluster_lock *dlock)
> > +{
> > +}
> > +
> > +static void local_unlock(struct cluster_lock *dlock)
> > +{
> > +}
> > +
> >  static int local_update_node(struct sd_node *node)
> >  {
> >  	struct local_node lnode = this_node;
> > @@ -566,6 +579,9 @@ static struct cluster_driver cdrv_local = {
> >  	.notify = local_notify,
> >  	.block = local_block,
> >  	.unblock = local_unblock,
> > +	.init_lock = local_init_lock,
> > +	.lock = local_lock,
> > +	.unlock = local_unlock,
> >  	.update_node = local_update_node,
> >  };
> >
> > diff --git a/sheep/cluster/zookeeper.c b/sheep/cluster/zookeeper.c
> > index fa89c46..01cb37c 100644
> > --- a/sheep/cluster/zookeeper.c
> > +++ b/sheep/cluster/zookeeper.c
> > @@ -30,6 +30,21 @@
> >  #define QUEUE_ZNODE BASE_ZNODE "/queue"
> >  #define MEMBER_ZNODE BASE_ZNODE "/member"
> >  #define MASTER_ZNONE BASE_ZNODE "/master"
> > +#define LOCK_ZNODE BASE_ZNODE "/lock"
> > +
> > +#define MAX_MUTEX_NR 4096
> > +#define WAIT_TIME 1		/* second */
> > +
> > +static struct cluster_lock **dlock_array;
> > +
> > +/*
> > + * Wait a while when create, delete or get_children fails on the
> > + * zookeeper lock, so the retry loop does not flood the log
> > + */
> > +static void zk_wait(void)
> > +{
> > +	sleep(WAIT_TIME);
> > +}
> >
> >  /* iterate child znodes */
> >  #define FOR_EACH_ZNODE(parent, path, strs)			\
> > @@ -506,6 +521,7 @@ static void zk_watcher(zhandle_t *zh, int type, int state, const char *path,
> >  {
> >  	struct zk_node znode;
> >  	char str[MAX_NODE_STR_LEN], *p;
> > +	uint64_t lock_id;
> >  	int ret;
> >
> >  	if (type == ZOO_SESSION_EVENT && state == ZOO_EXPIRED_SESSION_STATE) {
> > @@ -528,6 +544,14 @@ static void zk_watcher(zhandle_t *zh, int type, int state, const char *path,
> >  	} else if (type == ZOO_DELETED_EVENT) {
> >  		struct zk_node *n;
> >
> > +		/* process the distributed lock */
> > +		ret = sscanf(path, LOCK_ZNODE "/%lu/%s", &lock_id, str);
> > +		if (ret == 2 && lock_id < MAX_MUTEX_NR &&
> > +		    dlock_array && dlock_array[lock_id]) {
> > +			pthread_mutex_unlock(&(dlock_array[lock_id]->wait));
> > +			sd_debug("release lock %lu %s", lock_id, str);
> > +		}
> > +
> >  		ret = sscanf(path, MASTER_ZNONE "/%s", str);
> >  		if (ret == 1) {
> >  			zk_compete_master();
> > @@ -1058,6 +1082,108 @@ kick_block_event:
> >  	kick_block_event();
> >  }
> >
> > +static int zk_init_lock(struct cluster_lock *dlock, uint64_t lock_id)

> Rename dlock as clock for better uniformity.

"clock" would be more confusing than "dlock"... Never mind, I will use
"cluster_lock", which is clearer.

> > +{
> > +	int rc = 0;
> > +	char path[MAX_NODE_STR_LEN];
> > +
> > +	if (lock_id > MAX_MUTEX_NR) {

> If we constrain lock_id to [0, MAX_MUTEX_NR], then uint64_t is
> meaningless. IIUC, lock_id is the unique identifier representing which
> lock is under operation. This means we could use vid + number to represent
> which lock the code is trying to hold. But with the above check,
> vid + number > MAX_MUTEX_NR almost always, no? So it seems that different
> clients can't simply use 'vid + xxx' to represent a lock without extra
> communication.
without extra communication.<br></blockquote><div><br></div><div>I think it need to change dlock_array to hash table to support whole uint64_t lock_id.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

> Thanks
> Yuan

--
Best Regards
Robin Dong