This is the third major release of support for iSCSI Extensions for RDMA (iSER) to the existing TGT user space SCSI target. It uses OpenFabrics libraries and kernel drivers to act as a SCSI target over RDMA-capable devices. The code has been tested against the existing Linux iSER initiator over InfiniBand cards, but should be specification compliant and work generally.

A bit of documentation is included, and a short technical report is available at http://www.osc.edu/~pw/papers/iser-snapi07.pdf with slides from a presentation at http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf .

The iSER patches can be downloaded from:

    git://git.osc.edu/tgt

or browsed at:

    http://git.osc.edu/?p=tgt.git;a=summary

Changes since the previous series are as follows. Tomo redid substantial parts of tgt, including data buffer handling, and merged many of the easier patches from the previous iSER series. This reduces the load from 20 patches down to 6 this time (not counting the two little task->data ones recently posted). There is not a lot of change in the remaining patches from the previous release, just some minor alterations to accommodate core tgt changes and to follow some suggestions from Tomo. Here's the summary:

    1  iser docs - just doc/

    2  iser task transport data - needs private data in struct task;
       could merge as is, or let the iSCSI transport do the allocation,
       as with the connection struct.

    3  iser rounding - removed the "if (conn->tp->rdma)" approach; now
       uses "conn->tp->data_padding". Also deleted some roundup()s that
       were unnecessary in both cases. (A small sketch at the end of
       this note illustrates the idea.)

    4  iser params - new parameters for iSER straight from the spec
       docs. No ifdef on ISCSI_RDMA; we let the initiator say if it
       wants RDMA or not.

    5  iser iscsid changes - five little "if (conn->tp->rdma)" changes,
       all necessary due to protocol differences and a different event
       handler.

    6  iser core - add iscsi/iscsi_rdma.c and the hooks to use it.

Performance has changed a bit for the worse in tgt core since the last release, unfortunately. Brief results from an unscientific study, using the same kernel (2.6.23-rc1) but different userspace libraries (f8 vs f7). All ramdisk, same tuning in all cases.

GigE: I'm seeing about 5 MB/s less for both read and write.

IPoIB: writes are unchanged, but reads go up(!) from 140 to 180 MB/s. Must be libibverbs.so improvements.

iSER: writes are a bit worse this time around, by maybe 50 MB/s or so, and reads lose about 100 MB/s for some transfer sizes. Still 400 MB/s-ish for writes, but now only 500 MB/s-ish for reads.

Hopefully none of this shows up when using real disks. I'll look at what causes these performance changes and try out some more recent kernels, but I wanted to get the patches out now so as not to hold up any other work. Would love to see some more scientific numbers from people, and help figuring out where the time is going.

		-- Pete
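
P.S. For anyone curious what the padding change in patch 3 amounts to, here is a rough sketch of the idea. This is not the actual tgt code; the struct layouts and the padded_len() helper are made up for illustration, but the conn->tp->data_padding field and the roundup() usage follow the description above. Each transport advertises its own padding granularity (4 for TCP's word alignment, 1 for iSER, which pads nothing), so the generic iSCSI code can round a data segment length without branching on the transport type:

    /* illustrative only -- not the real tgt structures */
    #define roundup(x, y)  ((((x) + (y) - 1) / (y)) * (y))

    struct iscsi_transport {
            int rdma;                   /* transport is RDMA-capable */
            unsigned int data_padding;  /* 4 for TCP, 1 for iSER */
            /* ... */
    };

    struct iscsi_connection {
            struct iscsi_transport *tp;
            /* ... */
    };

    /* hypothetical helper: pad a data segment length to whatever
     * granularity the transport asks for, instead of writing
     * "if (conn->tp->rdma)" at every call site */
    static unsigned int padded_len(struct iscsi_connection *conn,
                                   unsigned int datalen)
    {
            return roundup(datalen, conn->tp->data_padding);
    }

With data_padding set to 1 the roundup() is a no-op, which is why the RDMA special cases (and a few redundant roundup()s) could be deleted.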