[sheepdog] [PATCH] sheep: refactor object cache

Liu Yuan namei.unix at gmail.com
Wed Dec 25 06:03:57 CET 2013


On Wed, Dec 25, 2013 at 01:55:55PM +0900, Hitoshi Mitake wrote:
> At Tue, 24 Dec 2013 16:58:04 +0800,
> Liu Yuan wrote:
> > 
> > On Tue, Dec 24, 2013 at 11:11:58AM +0900, Hitoshi Mitake wrote:
> > 
> > The title 'refactor object cache' is too broad. Please write what your
> > patch actually does.
> > 
> > > This patch removes the confusing variable "in_push" from the object
> > > cache. The most important change is removing the usleep() wait that
> > > was based on that variable.
> > > 
> > > Signed-off-by: Hitoshi Mitake <mitake.hitoshi at lab.ntt.co.jp>
> > > ---
> > >  sheep/object_cache.c |   19 +++++++++----------
> > >  1 file changed, 9 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/sheep/object_cache.c b/sheep/object_cache.c
> > > index 10a051c..546f781 100644
> > > --- a/sheep/object_cache.c
> > > +++ b/sheep/object_cache.c
> > > @@ -61,7 +61,7 @@ struct object_cache {
> > >  	struct list_head lru_head; /* Per VDI LRU list for reclaimer */
> > >  	struct list_head dirty_head; /* Dirty objects linked to this list */
> > >  	int push_efd; /* Used to synchronize between pusher and push threads */
> > > -	uatomic_bool in_push; /* Whether if pusher is running */
> > > +	pthread_mutex_t push_mutex; /* mutex for pushing cache */
> > >  
> > >  	struct sd_lock lock; /* Cache lock */
> > >  };
> > > @@ -224,11 +224,11 @@ static void do_background_push(struct work *work)
> > >  	struct push_work *pw = container_of(work, struct push_work, work);
> > >  	struct object_cache *oc = pw->oc;
> > >  
> > > -	if (!uatomic_set_true(&oc->in_push))
> > > +	if (pthread_mutex_trylock(&oc->push_mutex) == EBUSY)
> > >  		return;
> > >  
> > >  	object_cache_push(oc);
> > > -	uatomic_set_false(&oc->in_push);
> > > +	pthread_mutex_unlock(&oc->push_mutex);
> > >  }
> > >  
> > >  static void background_push_done(struct work *work)
> > > @@ -263,13 +263,11 @@ static void add_to_dirty_list(struct object_cache_entry *entry)
> > >  	list_add_tail(&entry->dirty_list, &oc->dirty_head);
> > >  	/* FIXME read sys->status atomically */
> > >  	if (uatomic_add_return(&oc->dirty_count, 1) > MAX_DIRTY_OBJECT_COUNT
> > > -	    && !uatomic_is_true(&oc->in_push)
> > >  	    && sys->cinfo.status == SD_STATUS_OK)
> > >  		kick_background_pusher(oc);
> > >  }
> > >  
> > > -static inline void
> > > -free_cache_entry(struct object_cache_entry *entry)
> > > +static inline void free_cache_entry(struct object_cache_entry *entry)
> > >  {
> > >  	struct object_cache *oc = entry->oc;
> > >  
> > > @@ -638,6 +636,8 @@ not_found:
> > >  
> > >  		sd_init_lock(&cache->lock);
> > >  		hlist_add_head(&cache->hash, head);
> > > +
> > > +		pthread_mutex_init(&cache->push_mutex, NULL);
> > >  	} else {
> > >  		cache = NULL;
> > >  	}
> > > @@ -1191,11 +1191,10 @@ int object_cache_flush_vdi(uint32_t vid)
> > >  	 * that dirty bits produced while it is waiting are guaranteed
> > >  	 * to be pushed back
> > >  	 */
> > > -	while (!uatomic_set_true(&cache->in_push))
> > > -		usleep(100000);
> > > -
> > > +	pthread_mutex_lock(&cache->push_mutex);
> > >  	ret = object_cache_push(cache);
> > > -	uatomic_set_false(&cache->in_push);
> > > +	pthread_mutex_unlock(&cache->push_mutex);
> > 
> > pthread_mutex_*() calls can fail sometimes. I'd like you to add sd_mutex
> > helpers that check the return value, as sd_lock does, and use them in
> > this patch.
> 
> Replacing pthread_mutex_lock() would make this patch too big. From the
> perspective of backporting, I want this patch to be pushed to master

Why do you need to backport it in such a hurry? It doesn't fix any bug. With
that in mind, I think we have enough time to prepare patches that handle the
mutex issue first.
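
For reference, a minimal sketch of what such sd_mutex helpers in the style
of sd_lock could look like (the sd_mutex_* names are hypothetical, nothing
like this is in the tree yet, and the real helpers would call sheepdog's
panic() rather than abort()):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static inline void sd_init_mutex(pthread_mutex_t *mutex)
{
	int ret = pthread_mutex_init(mutex, NULL);

	if (ret != 0) {
		fprintf(stderr, "failed to init mutex: %s\n", strerror(ret));
		abort(); /* in the tree this would be panic() */
	}
}

static inline void sd_mutex_lock(pthread_mutex_t *mutex)
{
	int ret = pthread_mutex_lock(mutex);

	if (ret != 0) {
		fprintf(stderr, "failed to lock mutex: %s\n", strerror(ret));
		abort();
	}
}

/* Returns true if the mutex was acquired, false if it is already held. */
static inline bool sd_mutex_trylock(pthread_mutex_t *mutex)
{
	int ret = pthread_mutex_trylock(mutex);

	if (ret == EBUSY)
		return false;
	if (ret != 0) {
		fprintf(stderr, "failed to trylock mutex: %s\n", strerror(ret));
		abort();
	}
	return true;
}

static inline void sd_mutex_unlock(pthread_mutex_t *mutex)
{
	int ret = pthread_mutex_unlock(mutex);

	if (ret != 0) {
		fprintf(stderr, "failed to unlock mutex: %s\n", strerror(ret));
		abort();
	}
}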

>
> now. I'll write a patch for the mutex wrapper before v0.8 as a separate
> topic (and it will be backported to v0.7.7 or v0.7.8).
> 
> Could you push it with the title "sheep: refactor the mutex mechanism of
> pushing object cache"?

I'd suggest a title that states more explicitly what your patch does, e.g.:

sheep: use mutex instead of busy waiting for object pushing control


I'm afraid I can't, because raw pthread mutex calls are known to be
error-prone (they can fail and return an error code that must be checked),
and I can't merge a patch with a known problem that should be addressed
beforehand. Yes, there are many prior patches with the same raw mutex usage,
but at the time we weren't sensitive to POSIX mutex error handling, and I
think now is a good time to address the mutex issue before this patch goes in.
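
With such helpers in place, the hunks above would reduce to, for example:

	/* in do_background_push(): skip this round if a push is in flight */
	if (!sd_mutex_trylock(&oc->push_mutex))
		return;

	object_cache_push(oc);
	sd_mutex_unlock(&oc->push_mutex);

and an unexpected pthread error would abort loudly instead of being
silently ignored.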

Thanks
Yuan


