<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">2013/11/29 Liu Yuan <span dir="ltr"><<a href="mailto:namei.unix@gmail.com" target="_blank">namei.unix@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
On Fri, Nov 29, 2013 at 06:35:11PM +0800, Robin Dong wrote:<br>
<br>
The patch title needs updating, e.g., to "sheep/cluster: introduce distributed lock"<br>
<div class="im"><br>
> Implement the distributed lock by zookeeper (refer: <a href="http://zookeeper.apache.org/doc/trunk/recipes.html" target="_blank">http://zookeeper.apache.org/doc/trunk/recipes.html</a>)<br>
> The routine is:<br>
> 1. create a seq-ephemeral znode in the lock directory (use lock-id as the dir name)<br>
> 2. get the smallest file path as the owner of the lock; the other threads wait on a pthread_mutex_t<br>
> 3. if the owner of the lock releases it (or the owner is killed by accident), zookeeper will<br>
> trigger zk_watch(), which will wake up all waiting threads to compete for new ownership of the lock<br>
><br>
> We add ->local_lock for dist_mutex to prevent many threads in one sheepdog daemon from creating<br>
> too many files in a lock directory.<br>
<br>
</div>What kind of scenario exactly does this handle? It would be great if you could<br>
show us a use case or a clear problem this ->local_lock solves. Unless there is<br>
a definite use case, I think we should avoid premature optimization.<br>
<div><div class="h5"><br>
><br>
> v2 --> v3:<br>
> 1. change cluster interface to init_lock()/lock()/unlock()<br>
> 2. change 'struct zk_mutex' to 'struct cluster_lock'<br>
> 3. add empty implementation to local/corosync<br>
><br>
> v1 --> v2:<br>
> move code from sheep/http/lock.c into sheep/cluster/zookeeper.c using cluster framework<br>
><br>
> Signed-off-by: Robin Dong <<a href="mailto:sanbai@taobao.com">sanbai@taobao.com</a>><br>
> ---<br>
> sheep/cluster.h | 25 ++++++++<br>
> sheep/cluster/corosync.c | 16 +++++<br>
> sheep/cluster/local.c | 16 +++++<br>
> sheep/cluster/zookeeper.c | 140 ++++++++++++++++++++++++++++++++++++++++++++-<br>
> 4 files changed, 196 insertions(+), 1 deletions(-)<br>
><br>
> diff --git a/sheep/cluster.h b/sheep/cluster.h<br>
> index 81b5ae4..260573c 100644<br>
> --- a/sheep/cluster.h<br>
> +++ b/sheep/cluster.h<br>
> @@ -23,6 +23,13 @@<br>
> #include "sheep.h"<br>
> #include "config.h"<br>
><br>
> +struct cluster_lock {<br>
> + uint32_t id; /* id of this mutex */<br>
> + pthread_mutex_t wait;<br>
> + pthread_mutex_t local_lock;<br>
> + char ephemeral_path[MAX_NODE_STR_LEN];<br>
> +};<br>
<br>
</div></div>I suppose cluster_lock will be used throughout the sheep daemon, so this<br>
should be declared in include/sheep.h<br>
<div class="im"><br>
> /*<br>
> * maximum payload size sent in ->notify and ->unblock, it should be large<br>
> * enough to support COROSYNC_MAX_NODES * struct sd_node<br>
> @@ -109,6 +116,24 @@ struct cluster_driver {<br>
> int (*unblock)(void *msg, size_t msg_len);<br>
><br>
> /*<br>
> + * Init a distributed mutually exclusive lock to avoid race condition<br>
> + * when using swift interface to add/delete/list object.<br>
> + *<br>
> + * Returns SD_RES_XXX<br>
> + */<br>
<br>
</div>The 'swift' reference should be removed from the comment. And I think you'd<br>
better describe 'id' in the comment too.<br>
<div class="im"><br>
><br>
> + int (*init_lock)(struct cluster_lock *lock, uint32_t id);<br>
<br>
</div>The uint32_t id is supposed to be a per-vdi lock, right? I am wondering if it can<br>
solve the following case:<br>
<br>
(AG) Allocation Group<br>
<br>
vdi: AG1, AG2, AG3 ...<br>
<br>
Every AG needs a lock to allow concurrent allocation and deallocation in the vdi,<br>
so your lock can't handle that, right?<br>
<br>
What about extending it to uint64_t? Then we can use 'vdi + lock_id' to solve the<br>
above case.<br>
<div class="im"><br>
<br>
> + /*<br>
> + * Get the distributed lock.<br>
<br>
</div>Better as "Acquire the distributed lock", and we need a bit more comment on how<br>
it works; I'd suggest man pthread_mutex_lock as a reference.<br>
<div class="im"><br>
> + *<br>
> + * It will never return if it can't acquire the lock.<br>
> + */<br>
><br>
> + void (*lock)(struct cluster_lock *lock);<br>
> +<br>
> + /* Release the distributed lock. */<br>
<br>
</div>Ditto<br>
<div><div class="h5"><br>
> + void (*unlock)(struct cluster_lock *lock);<br>
> +<br>
> + /*<br>
> * Update the specific node in the driver's private copy of nodes<br>
> *<br>
> * Returns SD_RES_XXX<br>
> diff --git a/sheep/cluster/corosync.c b/sheep/cluster/corosync.c<br>
> index ea4421b..35c95d3 100644<br>
> --- a/sheep/cluster/corosync.c<br>
> +++ b/sheep/cluster/corosync.c<br>
> @@ -774,6 +774,19 @@ again:<br>
> return 0;<br>
> }<br>
><br>
> +static int corosync_init_lock(struct cluster_lock *dlock, uint32_t id)<br>
> +{<br>
> + return -1;<br>
> +}<br>
> +<br>
> +static void corosync_lock(struct cluster_lock *dlock)<br>
> +{<br>
> +}<br>
> +<br>
> +static void corosync_unlock(struct cluster_lock *dlock)<br>
> +{<br>
> +}<br>
> +<br>
> static int corosync_update_node(struct sd_node *node)<br>
> {<br>
> struct cpg_node cnode = this_node;<br>
> @@ -794,6 +807,9 @@ static struct cluster_driver cdrv_corosync = {<br>
> .notify = corosync_notify,<br>
> .block = corosync_block,<br>
> .unblock = corosync_unblock,<br>
> + .init_lock = corosync_init_lock,<br>
> + .lock = corosync_lock,<br>
> + .unlock = corosync_unlock,<br>
> .update_node = corosync_update_node,<br>
> };<br>
><br>
> diff --git a/sheep/cluster/local.c b/sheep/cluster/local.c<br>
> index b8cbb5c..d71cd20 100644<br>
> --- a/sheep/cluster/local.c<br>
> +++ b/sheep/cluster/local.c<br>
> @@ -547,6 +547,19 @@ static int local_init(const char *option)<br>
> return 0;<br>
> }<br>
><br>
> +static int local_init_lock(struct cluster_lock *dlock, uint32_t id)<br>
> +{<br>
> + return -1;<br>
> +}<br>
> +<br>
> +static void local_lock(struct cluster_lock *dlock)<br>
> +{<br>
> +}<br>
> +<br>
> +static void local_unlock(struct cluster_lock *dlock)<br>
> +{<br>
> +}<br>
> +<br>
> static int local_update_node(struct sd_node *node)<br>
> {<br>
> struct local_node lnode = this_node;<br>
> @@ -566,6 +579,9 @@ static struct cluster_driver cdrv_local = {<br>
> .notify = local_notify,<br>
> .block = local_block,<br>
> .unblock = local_unblock,<br>
> + .init_lock = local_init_lock,<br>
> + .lock = local_lock,<br>
> + .unlock = local_unlock,<br>
> .update_node = local_update_node,<br>
> };<br>
><br>
> diff --git a/sheep/cluster/zookeeper.c b/sheep/cluster/zookeeper.c<br>
> index fa89c46..fc6c207 100644<br>
> --- a/sheep/cluster/zookeeper.c<br>
> +++ b/sheep/cluster/zookeeper.c<br>
> @@ -30,6 +30,21 @@<br>
> #define QUEUE_ZNODE BASE_ZNODE "/queue"<br>
> #define MEMBER_ZNODE BASE_ZNODE "/member"<br>
> #define MASTER_ZNONE BASE_ZNODE "/master"<br>
> +#define LOCK_ZNODE BASE_ZNODE "/lock"<br>
> +<br>
> +#define MAX_MUTEX_NR 4096<br>
> +#define WAIT_TIME 1 /* second */<br>
> +<br>
> +static struct cluster_lock **dlock_array;<br>
> +<br>
> +/*<br>
> + * Wait a while when create, delete or get_children fail on<br>
> + * zookeeper lock so it will not print too much loop log<br>
> + */<br>
> +static void zk_wait(void)<br>
> +{<br>
> + sleep(WAIT_TIME);<br>
> +}<br>
><br>
> /* iterate child znodes */<br>
> #define FOR_EACH_ZNODE(parent, path, strs) \<br>
> @@ -506,6 +521,7 @@ static void zk_watcher(zhandle_t *zh, int type, int state, const char *path,<br>
> {<br>
> struct zk_node znode;<br>
> char str[MAX_NODE_STR_LEN], *p;<br>
> + uint32_t lock_id;<br>
> int ret;<br>
><br>
> if (type == ZOO_SESSION_EVENT && state == ZOO_EXPIRED_SESSION_STATE) {<br>
> @@ -528,6 +544,14 @@ static void zk_watcher(zhandle_t *zh, int type, int state, const char *path,<br>
> } else if (type == ZOO_DELETED_EVENT) {<br>
> struct zk_node *n;<br>
><br>
> + /* process distributed lock */<br>
> + ret = sscanf(path, LOCK_ZNODE "/%u/%s", &lock_id, str);<br>
> + if (ret == 2 && lock_id < MAX_MUTEX_NR &&<br>
> + dlock_array && dlock_array[lock_id]) {<br>
> + pthread_mutex_unlock(&(dlock_array[lock_id]->wait));<br>
> + sd_debug("release lock %u %s", lock_id, str);<br>
> + }<br>
> +<br>
> ret = sscanf(path, MASTER_ZNONE "/%s", str);<br>
> if (ret == 1) {<br>
> zk_compete_master();<br>
> @@ -1058,6 +1082,108 @@ kick_block_event:<br>
> kick_block_event();<br>
> }<br>
><br>
> +static int zk_init_lock(struct cluster_lock *dlock, uint32_t id)<br>
> +{<br>
> + int rc;<br>
> + char path[MAX_NODE_STR_LEN];<br>
> +<br>
> + if (id > MAX_MUTEX_NR) {<br>
> + sd_err("lock-id is too large!");<br>
> + rc = -1;<br>
> + goto err;<br>
> + }<br>
> +<br>
> + dlock_array[id] = dlock;<br>
> + dlock->id = id;<br>
> + snprintf(path, MAX_NODE_STR_LEN, LOCK_ZNODE "/%u", dlock->id);<br>
> + rc = zk_init_node(path);<br>
> + if (rc)<br>
> + goto err;<br>
> +<br>
> + rc = pthread_mutex_init(&dlock->wait, NULL);<br>
> + if (rc) {<br>
> + sd_err("failed to init dlock->wait");<br>
> + goto err;<br>
> + }<br>
> +<br>
> + rc = pthread_mutex_init(&dlock->local_lock, NULL);<br>
> + if (rc) {<br>
> + sd_err("failed to init dlock->local_lock");<br>
> + goto err;<br>
> + }<br>
> +<br>
> + return 0;<br>
> +err:<br>
> + dlock_array[id] = NULL;<br>
> + return rc;<br>
> +}<br>
> +<br>
> +static void zk_lock(struct cluster_lock *dlock)<br>
> +{<br>
> + int flags = ZOO_SEQUENCE | ZOO_EPHEMERAL;<br>
> + int rc, len = MAX_NODE_STR_LEN;<br>
> + char *my_path;<br>
> + char parent[MAX_NODE_STR_LEN];<br>
> + char lowest_seq_path[MAX_NODE_STR_LEN];<br>
> + char owner_name[MAX_NODE_STR_LEN];<br>
> +<br>
> + /*<br>
> + * if many threads use locks with same id, we should use<br>
> + * ->local_lock to avoid the only zookeeper handler to<br>
> + * create many seq-ephemeral files.<br>
> + */<br>
> + pthread_mutex_lock(&dlock->local_lock);<br>
> +<br>
> + my_path = dlock->ephemeral_path;<br>
> +<br>
> + /* compete owner of lock is just like zk_compete_master() */<br>
<br>
</div></div>Though this is not relevant to this patch, I am wondering if zk_compete_master<br>
can make use of your proposed lock?<br></blockquote><div><br></div><div>zk_compete_master() could return without success, but zk_lock() will return only when it</div><div>has acquired the lock. So the routines of the two functions are very different. Also, I have used</div>
<div>zk_get_least_seq() as the common code of the two functions.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
Thanks<br>
<span class=""><font color="#888888">Yuan<br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br>--<br>Best Regards,<br>Robin Dong
</div></div>