[sheepdog] [PATCH v1] sheep/http: fix error in bucket_delete

Liu Yuan namei.unix at gmail.com
Wed Aug 6 10:28:13 CEST 2014


On Wed, Aug 06, 2014 at 04:17:10PM +0800, Liu Yuan wrote:
> On Tue, Aug 05, 2014 at 08:33:46PM +0800, Bingpeng Zhu wrote:
> > From: NankaiZBP <nkuzbp at foxmail.com>
> > 
> > In the current implementation, when we create a bucket, we decide
> > the bnode's location in the account VDI using sd_hash(bucket_name)
> > as the key, and we handle hash conflicts by linear probing of the
> > hash table. Here is the bug:
> > When we delete a bucket, we can't discard its bnode, because
> > bnode_lookup() needs it to tell whether some bucket exists by
> > checking adjacent bnodes. Therefore, we just zero its bnode.name
> > when a client wants to delete a bucket. When we create a bucket
> > later, we can reuse the deleted bnode if it hashes to the same
> > location in the account VDI.
> 
> No, bnode_lookup() checks every object. What you said only applies to
> onode_lookup(). There is no such bug in bnode_lookup().
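
To make the quoted scheme concrete, here is a minimal sketch (not the
actual sheepdog code: struct node_entry, NR_SLOTS and node_hash() are
made-up stand-ins for the bnode layout and sd_hash()) of a probing
lookup in that style, which is roughly what onode lookup does:

#include <stdint.h>
#include <string.h>

#define NR_SLOTS 1024

enum slot_state { SLOT_FREE, SLOT_USED, SLOT_DELETED };

struct node_entry {
	enum slot_state state;
	char name[64];
};

static struct node_entry table[NR_SLOTS];

/* Toy stand-in for sd_hash(); any reasonable string hash would do here. */
static uint64_t node_hash(const char *name)
{
	uint64_t h = 5381;

	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h;
}

/*
 * Probing lookup: the probe chain only ends at a never-used slot.  A
 * deleted entry (the "zeroed bnode.name" in the patch description) must
 * stay in place as a tombstone, otherwise entries that collided past it
 * would become unreachable; that is exactly why such a scheme cannot
 * discard the entry on delete and instead reuses the slot on a later
 * create.
 */
static struct node_entry *probe_lookup(const char *name)
{
	uint64_t i = node_hash(name) % NR_SLOTS;
	int n;

	for (n = 0; n < NR_SLOTS; n++, i = (i + 1) % NR_SLOTS) {
		if (table[i].state == SLOT_FREE)
			return NULL;		/* end of probe chain */
		if (table[i].state == SLOT_USED &&
		    strcmp(table[i].name, name) == 0)
			return &table[i];	/* hit; tombstones are skipped */
	}
	return NULL;
}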

You can check this patch to see why onode lookup was reworked into its current form.

commit 0e86d5b0afa12a4ac580bdeae15aa48d7a0a7727
Author: Liu Yuan <namei.unix at gmail.com>
Date:   Wed Dec 25 13:28:28 2013 +0800

    sheep/http: unify onode lookup/find operation

But I didn't apply the same algorithm to bnode lookup/create. I forget exactly
why now, but my guess is that the reason was:

- compared to user objects, the number of buckets is very small, so the
  full scan in bnode_lookup is efficient at this scale of lookups; for a
  hyper volume there is almost never a collision anyway (see the sketch
  below).
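
For contrast, here is a sketch of the exhaustive scan that bnode_lookup
effectively does (again using the made-up types from the sketch above,
not the real code); because every slot is checked, zeroing or removing
an entry cannot hide another one, and with only a handful of buckets per
account the scan stays cheap:

/*
 * Exhaustive scan in the style of bnode_lookup: check every slot, so no
 * tombstone bookkeeping is needed and there is no probe chain to break.
 */
static struct node_entry *scan_lookup(const char *name)
{
	int i;

	for (i = 0; i < NR_SLOTS; i++) {
		if (table[i].state == SLOT_USED &&
		    strcmp(table[i].name, name) == 0)
			return &table[i];
	}
	return NULL;
}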

So to conclude, you are not fixing any bug; you are proposing to use the same
algorithm for bnodes that onode lookup already uses.

For now bnode_lookup is simpler and efficient, so I'd keep it as is until you
prove with real numbers that it is not good enough.

Thanks
Yuan


