[stgt] [Scst-devel] [Iscsitarget-devel] ISCSI-SCST performance (with also IET and STGT data)

Vladislav Bolkhovitin vst at vlnb.net
Thu Apr 2 17:36:26 CEST 2009


Ross S. W. Walker, on 04/02/2009 06:06 PM wrote:
> Vladislav Bolkhovitin wrote:
>> Vladislav Bolkhovitin, on 04/02/2009 11:38 AM wrote:
>>> James Bottomley, on 04/02/2009 12:23 AM wrote:
>>>> SCST explicitly fiddles with the io context to get this to happen.  It
>>>> has a hack to the block layer to export alloc_io_context:
>>>>
>>>> http://marc.info/?t=122893564800003
>>> Correct, although I wouldn't call it "fiddle", rather "grouping" ;)
> 
> Call it what you like,
> 
> Vladislav Bolkhovitin wrote:
>> Ross S. W. Walker, on 03/30/2009 10:33 PM wrote:
>>
>> I would be interested in knowing how your code defeats CFQ's extremely
>> high latency. Does your code reach into the io scheduler too? If not,
>> some code hints would be great.
> 
> Hmm, CFQ doesn't have any extra processing latency, especially not 
> "extremely" high, hence there is nothing to defeat. If it had, how could 
> it have been chosen as the default?
> 
> ----------
> List:       linux-scsi
> Subject:    [PATCH][RFC 13/23]: Export of alloc_io_context() function
> From:       Vladislav Bolkhovitin <vst () vlnb ! net>
> Date:       2008-12-10 18:49:19
> Message-ID: 49400F2F.4050603 () vlnb ! net
> 
> This patch exports the alloc_io_context() function. For performance 
> reasons SCST queues commands using a pool of IO threads. It is 
> considerably better for performance (>30% increase on sequential reads) 
> if the threads in a pool have the same IO context. Since SCST can be 
> built as a module, it needs the alloc_io_context() function exported.
> 
> <snip>
> ----------
> 
> I call that lying.
> 
>>> But that's not the only reason for good performance. Particularly, it 
>>> can't explain Bart's tmpfs results from the previous message, where the 
>>> majority of the I/O is done to/from RAM without any I/O scheduler 
>>> involved. (Or is the I/O scheduler also involved with tmpfs?) Bart has 
>>> 4GB RAM, if I remember correctly, i.e. the test data set was 25% of RAM.
>> To remove any suspicions that I'm playing dirty games here I should note 
> <snip>
> 
> I don't know what games you're playing at, but do me a favor: if you're 
> too stupid to realize when you're caught in a lie and to just shut up, 
> then please do me the favor and leave me out of any further correspondence
> from you.

Think what you want and do what you want. You can even filter out all 
e-mails from me, that's your right. But:

1. As I wrote, grouping threads into a single IO context doesn't explain 
all of the performance difference, and tracking down the reasons for 
others' performance problems isn't something I can afford at the moment. 
(A minimal sketch of what that grouping looks like follows point 3 below.)

2. CFQ doesn't have any processing latency and never has. Learn to 
understand what you are writing about, and how to express yourself 
correctly, first. You asked about that latency and I replied that there 
is nothing to defeat.

3. SCST doesn't have any hooks into CFQ and isn't going to have any in 
the foreseeable future.
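
For reference, here is a minimal sketch of what "grouping threads into a 
single IO context" means in practice. This is illustrative only, not the 
actual SCST code; the names (pool_ioc, pool_io_thread, the pool size of 4) 
are made up, and it assumes the in-kernel API of that era 
(alloc_io_context(), ioc_task_link(), put_io_context()). The quoted patch 
exists precisely because alloc_io_context() is not normally exported to 
modules:

/*
 * Illustrative sketch (not the actual SCST code): a pool of kernel
 * threads adopting one shared io_context, so CFQ treats their requests
 * as coming from a single submitter.
 */
#include <linux/init.h>
#include <linux/iocontext.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>

static struct io_context *pool_ioc;	/* shared by all threads in the pool */

static int pool_io_thread(void *arg)
{
	/* Adopt the shared IO context before submitting any block I/O. */
	put_io_context(current->io_context);
	current->io_context = ioc_task_link(pool_ioc);

	while (!kthread_should_stop()) {
		/* ... dequeue a command and submit its bios here ... */
		schedule_timeout_interruptible(HZ);
	}
	return 0;
}

static int __init ioc_pool_example_init(void)
{
	int i;

	/* alloc_io_context() is what the quoted patch exports to modules. */
	pool_ioc = alloc_io_context(GFP_KERNEL, -1);
	if (!pool_ioc)
		return -ENOMEM;

	for (i = 0; i < 4; i++)
		kthread_run(pool_io_thread, NULL, "ioc_pool/%d", i);
	return 0;
}
module_init(ioc_pool_example_init);
MODULE_LICENSE("GPL");

Cleanup (stopping the threads and dropping the shared context with 
put_io_context()) is omitted for brevity. With the shared context, CFQ 
sees the whole pool as one submitter, so a sequential stream handled by 
several threads still looks sequential to the scheduler.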

> Thank you,
> 
> -Ross
