[sheepdog] [RFC PATCH] sdnet: threading tx/rx process
Liu Yuan
namei.unix at gmail.com
Mon Jul 16 08:36:18 CEST 2012
On 07/16/2012 02:24 PM, Christoph Hellwig wrote:
> On Mon, Jul 16, 2012 at 02:14:24PM +0800, Liu Yuan wrote:
>> On 07/16/2012 02:09 PM, Christoph Hellwig wrote:
>>> The threads doing the non-blocking network I/O won't do disk I/O, or do
>>> I misread the code?
>>
>> For a running sheep, aren't 99%+ of the requests from VMs or peer sheep
>> that try to do disk I/O? I mean, after rx we'll do a disk I/O, then tx.
>> What I am trying to do is to overlap that rx/tx as much as possible:
>> 1) only do notification in the main thread and 2) offload network I/O
>> to threads.
>
> All disk I/O is offloaded to the I/O workqueue, not the RX/TX threads,
> and when I read through your patch I didn't see this being changed.
>
> Maybe we should go back and try to write down / draw the threading
> scheme at a higher level.
>
> Currently it is:
>
> main thread             I/O thread
>
> epoll(listenfd)
> accept
> epoll(iofd) / rx
>                         read/write
> epoll(iofd) / tx
>
> My suggestion was to move to:
>
>
> main thread        RX thread           TX thread           I/O thread
>
> epoll(listenfd)
> accept
>                    epoll(iofd) / rx
>                                                            read/write
>                                        epoll(iofd) / tx
>
Oh, I thought you meant the following:
main thread        connection thread1    ...    connection threadX

epoll(listenfd)
accept
                   epoll(iofd1)                 epoll(iofdX)
                   rx                           rx
                   read/write                   read/write
                   tx                           tx
                   epoll(iofd1)                 epoll(iofdX)

Well, so in your proposal we have 2 threads (rx/tx) per connection,
right? That is still too many threads once there are many connections.
Yes, we can avoid context switches, but I suspect the context switches
don't count for much, because from the perspective of a single request
we will do a disk I/O anyway. In conclusion, I think we can afford 2
more context switches. (We can optimize this later by redesigning the
thread implementation.)
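
To make the difference concrete, here is a minimal sketch of the
per-connection-thread model I had in mind. Illustrative only, not sheep
code: rx_request()/tx_response() are hypothetical stand-ins, and in the
real patch the disk I/O between them would still go to the I/O workqueue:

/* Sketch: one thread per accepted connection, each multiplexing its
 * own fd with a private epoll set. */
#include <pthread.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* hypothetical stand-ins for sheep's request handling */
static void rx_request(int fd)  { (void)fd; /* read one request */ }
static void tx_response(int fd) { (void)fd; /* write the response */ }

static void *conn_worker(void *arg)
{
	int iofd = (int)(long)arg;
	int efd = epoll_create1(0);
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = iofd };

	epoll_ctl(efd, EPOLL_CTL_ADD, iofd, &ev);
	for (;;) {
		struct epoll_event got;

		if (epoll_wait(efd, &got, 1, -1) <= 0)
			break;
		rx_request(iofd);   /* rx */
		/* disk I/O would be queued to the workqueue here */
		tx_response(iofd);  /* tx */
	}
	close(efd);
	close(iofd);
	return NULL;
}

/* main thread keeps only epoll(listenfd) + accept */
static void on_accept(int listenfd)
{
	pthread_t t;
	int iofd = accept(listenfd, NULL, NULL);

	if (iofd >= 0 &&
	    pthread_create(&t, NULL, conn_worker, (void *)(long)iofd) == 0)
		pthread_detach(t);
}
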
> Both posted proposals seem to be variants of:
>
>
> main thread        socket threads      I/O thread
>
> epoll(listenfd)
> accept
>                    epoll(iofd)
>                    rx
>                                        read/write
>                    epoll(iofd)
>                    tx
>
>
> Correct me if I'm wrong?
>
Yes, your understanding of our proposal is correct.
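
For the archives, a minimal sketch of that layout. Again illustrative
only: NR_SOCK_THREADS, rx_and_queue() and tx_done() are hypothetical
names, and the real code would reuse sheep's existing event loop and
I/O workqueue:

/* Sketch: the main thread keeps epoll(listenfd)/accept and hands new
 * fds to a fixed pool of socket threads; each socket thread owns one
 * epoll set and does rx/tx for many connections, while disk I/O stays
 * in the I/O workqueue. */
#include <pthread.h>
#include <sys/epoll.h>

#define NR_SOCK_THREADS 4

static int sock_efd[NR_SOCK_THREADS];

/* rx in the socket thread: read a request, queue its disk I/O */
static void rx_and_queue(int fd) { (void)fd; }
/* tx in the socket thread: write back a completed response */
static void tx_done(int fd)      { (void)fd; }

static void *sock_worker(void *arg)
{
	int efd = *(int *)arg;

	for (;;) {
		struct epoll_event ev;

		if (epoll_wait(efd, &ev, 1, -1) <= 0)
			continue;
		if (ev.events & EPOLLIN)
			rx_and_queue(ev.data.fd);
		if (ev.events & EPOLLOUT)
			tx_done(ev.data.fd);
	}
	return NULL;
}

static void init_sock_threads(void)
{
	pthread_t t;
	int i;

	for (i = 0; i < NR_SOCK_THREADS; i++) {
		sock_efd[i] = epoll_create1(0);
		pthread_create(&t, NULL, sock_worker, &sock_efd[i]);
		pthread_detach(t);
	}
}

/* main thread: distribute each accepted fd round robin */
static void dispatch(int iofd)
{
	static int next;
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = iofd };

	epoll_ctl(sock_efd[next], EPOLL_CTL_ADD, iofd, &ev);
	next = (next + 1) % NR_SOCK_THREADS;
}
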
Thanks,
Yuan