[stgt] [PATCH] nonblocking epoll_wait loop, sched events, ISER/IB polling
FUJITA Tomonori
fujita.tomonori at lab.ntt.co.jp
Sun Sep 28 11:58:28 CEST 2008
On Sun, 28 Sep 2008 12:38:35 +0300
Alexander Nezhinsky <nezhinsky at gmail.com> wrote:
> This patch introduces a few interconnected improvements that
> ultimately lead to a significant reduction in the interrupt rate for
> iSER/IB, while adding flexibility to the tgtd event-processing scheme.
>
> First, it implements a kind of "scheduler" list to replace the
> counter events. Event handlers can schedule other events, which are
> placed on the scheduler's list. This has some small advantages in
> itself: the same event descriptor is used for all events in the
> system, events are executed in the order they were scheduled, they
> can be removed from the list, and several instances of the same
> event may be scheduled.
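>
> To illustrate the idea, a minimal sketch of such a scheduler event
> and its helpers could look like this (the names tgt_sched_event,
> sched_events_list etc. are illustrative, not necessarily the
> identifiers used in the patch; kernel-style list helpers from tgtd's
> list.h are assumed):
>
> #include "list.h"
>
> struct tgt_sched_event {
>         struct list_head list;  /* entry on sched_events_list */
>         void (*handler)(struct tgt_sched_event *ev);
>         void *data;             /* handler's private context */
> };
>
> static LIST_HEAD(sched_events_list);
>
> static void tgt_sched_event_init(struct tgt_sched_event *ev,
>                                  void (*handler)(struct tgt_sched_event *),
>                                  void *data)
> {
>         INIT_LIST_HEAD(&ev->list);
>         ev->handler = handler;
>         ev->data = data;
> }
>
> static void tgt_sched_event_add(struct tgt_sched_event *ev)
> {
>         /* queue at the tail, so events run in scheduling order;
>          * an already queued descriptor is not queued twice */
>         if (list_empty(&ev->list))
>                 list_add_tail(&ev->list, &sched_events_list);
> }
>
> static void tgt_sched_event_del(struct tgt_sched_event *ev)
> {
>         if (!list_empty(&ev->list))
>                 list_del_init(&ev->list);
> }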
>
> But the most important change is the logic of the processing loop as
> a whole. It goes as follows:
>
> The scheduler checks the events on the list and processes them. This
> processing may schedule more items, but the new items are not
> processed in the same pass (in order to avoid infinite or overly
> long loops).
>
> If, after processing all the "old" events, some "new" ones were
> scheduled, epoll_wait() is executed in a non-blocking manner. This
> guarantees that file descriptors which became ready during the
> scheduler processing are handled, but if there are no ready fds, the
> remaining scheduler events are processed immediately.
>
> But if nothing new is added while processing the scheduler list,
> epoll_wait() blocks as in the current scheme.
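>
> Roughly, the resulting main loop could look like the following
> sketch (reusing the illustrative names from the snippet above; the
> real tgtd code differs in its details and error handling):
>
> #include <sys/epoll.h>
>
> static void tgt_exec_scheduled(void)
> {
>         struct list_head *last = sched_events_list.prev;
>         struct tgt_sched_event *ev, *n;
>
>         if (list_empty(&sched_events_list))
>                 return;
>
>         /* run only the events present now; whatever the handlers
>          * schedule below stays on the list for the next pass */
>         list_for_each_entry_safe(ev, n, &sched_events_list, list) {
>                 int was_last = (&ev->list == last);
>
>                 tgt_sched_event_del(ev);
>                 ev->handler(ev);
>                 if (was_last)
>                         break;
>         }
> }
>
> static void event_loop(int ep_fd)
> {
>         struct epoll_event events[128];
>         int i, nevent, timeout;
>
>         for (;;) {
>                 tgt_exec_scheduled();
>
>                 /* block only if nothing new has been scheduled;
>                  * otherwise just pick up fds already ready */
>                 timeout = list_empty(&sched_events_list) ? -1 : 0;
>
>                 nevent = epoll_wait(ep_fd, events, 128, timeout);
>                 for (i = 0; i < nevent; i++) {
>                         struct tgt_sched_event *ev = events[i].data.ptr;
>
>                         ev->handler(ev);
>                 }
>         }
> }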
>
> This way we can be very flexible, because event handlers and deferred
> work cannot block one another. Potentially long handlers can easily
> be split into shorter ones without blocking the entire target.
>
> Finally, the IB completion queue is the first to benefit, because we
> can postpone interrupt re-arming until no completion entries remain,
> and schedule CQ "draining" after all events are serviced.
>
> When a completion event is handled, the CQ is polled and up to a
> given number of WCs (currently set to 8) are processed.
>
> If more completions are left on the CQ, essentially the same handler
> is scheduled to carry out the next round of polling, but in the
> meantime other events in the system can be serviced.
>
> If the CQ is found empty, interrupts are re-armed and a handler is
> scheduled to consume any completions that could sneak in between the
> moment the CQ was seen empty and the moment the interrupts were
> re-armed.
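>
> For the iser/ib part, the completion-channel handler and the
> scheduled polling could look roughly as follows (an illustrative
> sketch reusing the scheduler-event helpers above; iser_handle_wc(),
> the variable names and the simplified error handling are
> placeholders, not the exact code of the patch):
>
> #include <infiniband/verbs.h>
>
> #define MAX_POLL_WC 8   /* WCs processed per polling round */
>
> static struct ibv_comp_channel *comp_channel;
> static struct ibv_cq *cq;
> static struct tgt_sched_event poll_ev;   /* handler: iser_sched_poll_cq */
> static struct tgt_sched_event drain_ev;  /* handler: iser_sched_drain_cq */
>
> static void iser_handle_wc(struct ibv_wc *wc);  /* per-WC processing */
>
> /* poll one batch of completions; returns how many were handled */
> static int iser_poll_batch(void)
> {
>         struct ibv_wc wc[MAX_POLL_WC];
>         int i, n;
>
>         n = ibv_poll_cq(cq, MAX_POLL_WC, wc);
>         if (n < 0)
>                 return 0;       /* error handling simplified */
>         for (i = 0; i < n; i++)
>                 iser_handle_wc(&wc[i]);
>         return n;
> }
>
> /* scheduled polling round */
> static void iser_sched_poll_cq(struct tgt_sched_event *ev)
> {
>         if (iser_poll_batch() == MAX_POLL_WC) {
>                 /* probably more pending: next round on the next
>                  * scheduler pass, so other events run in between */
>                 tgt_sched_event_add(&poll_ev);
>         } else {
>                 /* CQ seen empty: re-arm interrupts, then schedule
>                  * one final drain for completions that sneaked in
>                  * before the re-arming */
>                 ibv_req_notify_cq(cq, 0);
>                 tgt_sched_event_add(&drain_ev);
>         }
> }
>
> /* final drain after re-arming; completions arriving later raise a
>  * new CQ event, so this handler neither re-arms nor reschedules */
> static void iser_sched_drain_cq(struct tgt_sched_event *ev)
> {
>         while (iser_poll_batch() == MAX_POLL_WC)
>                 ;
> }
>
> /* epoll handler for the completion channel fd: ack the event and
>  * defer the polling to the scheduler, postponing the re-arming */
> static void iser_cqe_handler(struct tgt_sched_event *ev)
> {
>         struct ibv_cq *ev_cq;
>         void *ev_ctx;
>
>         if (ibv_get_cq_event(comp_channel, &ev_cq, &ev_ctx))
>                 return;
>         ibv_ack_cq_events(ev_cq, 1);
>         tgt_sched_event_add(&poll_ev);
> }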
>
> Thus, for iSER over IB there is a marked reduction in the interrupt
> rate. Here are typical results:
> Current target: 62-65 KIOPS, ~110,000 interrupts/sec, CPU ~68%
> Patched target: 65-70 KIOPS, ~65,000 interrupts/sec, CPU ~65%
>
> Signed-off-by: Alexander Nezhinsky <nezhinsky at gmail.com>
Looks like a great performance improvement.
Applied, thanks a lot!