[stgt] [PATCH RFC] tgtd: send/recv iSCSI PDUs in worker threads

Hitoshi Mitake mitake.hitoshi at gmail.com
Sun Nov 10 09:39:07 CET 2013


At Sun, 10 Nov 2013 13:30:00 +0900,
Hitoshi Mitake wrote:
> 
> At Thu, 7 Nov 2013 16:11:17 +0900,
> FUJITA Tomonori wrote:
> > 
> > On Wed, 06 Nov 2013 10:04:42 +0900
> > Hitoshi Mitake <mitake.hitoshi at gmail.com> wrote:
> > 
> > > At Tue, 05 Nov 2013 23:28:45 +0900 (JST),
> > > FUJITA Tomonori wrote:
> > > > 
> > > > On Mon,  4 Nov 2013 14:13:16 +0900
> > > > Hitoshi Mitake <mitake.hitoshi at gmail.com> wrote:
> > > > 
> > > > > From: Hitoshi Mitake <mitake.hitoshi at lab.ntt.co.jp>
> > > > > 
> > > > > The current tgtd sends and receives iSCSI PDUs in its main event
> > > > > loop. This design can become a bottleneck when many iSCSI clients
> > > > > connect to a single tgtd process. For example, multiple tgtd
> > > > > processes are needed to utilize a fast network like 10 GbE,
> > > > > because a typical single processor core isn't fast enough to
> > > > > process that many requests.
> > > > > 
> > > > > This patch lets tgtd send/receive iSCSI PDUs and check digests in its
> > > > > worker threads. After applying this patch, the bottleneck in the main
> > > > > event loop is removed and the performance is improved.
> > > > > 
> > > > > The improvement can be seen even when tgtd and the iSCSI
> > > > > initiator are running on a single host. Below is a snippet of fio
> > > > > results on my laptop. The workload is 128MB random RW; the backing
> > > > > store is sheepdog.
> > > > > 
> > > > > original tgtd:
> > > > >   read : io=65392KB, bw=4445.2KB/s, iops=1111, runt= 14711msec
> > > > >   write: io=65680KB, bw=4464.8KB/s, iops=1116, runt= 14711msec
> > > > > 
> > > > > tgtd with this patch:
> > > > >   read : io=65392KB, bw=5098.9KB/s, iops=1274, runt= 12825msec
> > > > >   write: io=65680KB, bw=5121.3KB/s, iops=1280, runt= 12825msec
> > > > > 
> > > > > This change will be more effective as the number of iSCSI clients
> > > > > increases. I'd like to hear your comments on this change.
> > > > > 
> > > > > Signed-off-by: Hitoshi Mitake <mitake.hitoshi at lab.ntt.co.jp>
> > > > > ---
> > > > >  usr/iscsi/iscsi_tcp.c | 291 +++++++++++++++++++++++++++++++++++++++++++++++---
> > > > >  usr/iscsi/iscsid.c    |  61 +++++++----
> > > > >  usr/iscsi/iscsid.h    |   4 +
> > > > >  3 files changed, 322 insertions(+), 34 deletions(-)
> > > > 
> > > > This change doesn't affect our complicated logic to handle outstanding
> > > > commands with tcp disconnection (e.g. conn_close() in conn.c)?
> > > 
> > > I think it doesn't affect the closing logic, because no procedures
> > > other than send/recv and digest checking are delegated to worker
> > > threads. Task queuing, connection closing, etc. are still done in
> > > the main thread.
> > 
> > I've not read the patch, but conn_close() has code to handle a
> > response that is being sent when a TCP connection is closed.
> 
> Sorry, the above description is not correct. As you say, some
> send/recv would be done in the main event loop
> (e.g. iscsi_free_cmd_task()). But it wouldn't matter, because the
> connection-closing logic is preserved, and connection closing isn't
> an event that affects tgtd's performance.

On second thought, this patch cannot handle the case where
conn_close() doesn't release a connection. As you say, this would
conflict with the closing logic. I'll fix it and send v2 later.

Thanks,
Hitoshi

--
To unsubscribe from this list: send the line "unsubscribe stgt" in
the body of a message to majordomo at vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
