[sheepdog] [PATCH v2 1/3] sheep: make {register, unregister}_event thread safe

Hitoshi Mitake mitake.hitoshi at gmail.com
Tue Dec 17 08:47:35 CET 2013


At Tue, 17 Dec 2013 15:41:31 +0800,
Liu Yuan wrote:
> 
> On Tue, Dec 17, 2013 at 04:33:58PM +0900, Hitoshi Mitake wrote:
> > At Tue, 17 Dec 2013 15:11:30 +0800,
> > Liu Yuan wrote:
> > > >
> > > > From the perspective of
> > > > maintenance and debugging, I cannot agree with this patch. Adding a
> > > > new workqueue for register/unregister in the main loop will save lots
> > > > of our time in the future.
> > > 
> > > I think this approach will not work for async request handling, e.g,
> > > 
> > > +       uatomic_inc(&iocb->count);
> > > +       req->iocb = iocb;
> > > +       register_event(req->local_req_efd, local_req_async_handler, req);
> > > +       submit_local_request(req);
> > > 
> > > I have to register the event before submit_local_request().
> > > 
> > > If I don't, and instead delegate the registration to another thread,
> > > suppose the registration runs after the request has already been handled.
> > > Then I'll miss the notification that the request is done, and
> > > local_req_async_handler will never be called.
> > > 
> > > How do we handle above problem if we use your approach?
> > > 
> > > If we take performance into account, we can't do it with your approach
> > > either. Suppose we issue 1000 requests in async mode: we can't afford to
> > > trap into the main thread to register an event for every request.
> > 
> > I have an internal, under-construction change for parameterizing the event
> > loop. The current event loop mechanism has a limitation that only one event
> > loop can exist per process. The change removes this limitation.
> > 
> > Making a dedicated event loop that can be managed by worker threads would
> > help with your problem.
> 
> This is a more radical change compared to my patch, and I think it really
> needs discussion before we do it.
> 
> > The change is originally intended to solve the problem that dog can use
> > too much memory in some cases (e.g. checking large VDIs).
> > 
> 
> I can't figure out why memory consumption is related to the event loop.
> Having multiple loops doesn't mean threads will cost less memory.
> 

Current dog works like this:
1. produce every work
2. reduce every work with work_queue_wait()

This approach clearly consumes too much memory. For example, generating every
work up front for checking a hypervolume VDI would consume too large an area.

So we need a mechanism that limits the number of works produced at a given
point in time and reduces them from the main loop as needed.

Thanks,
Hitoshi
