[stgt] read-caching by tgtd

ronnie sahlberg ronniesahlberg at gmail.com
Thu May 16 18:18:12 CEST 2013


Hi,

There are no caches in TGT, so TGT does no caching at all: no read
cache, no write cache.
It just passes every request straight to the underlying storage,
using pread()/pwrite() for backing files or SG_IO for passthrough
devices.
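
Conceptually, the backing-file path boils down to something like this
(a simplified sketch with made-up names, not the actual tgt source):

#include <errno.h>
#include <unistd.h>

/* Sketch of the backing-file I/O path: every READ from the initiator
 * becomes a pread() on the backing file, every WRITE a pwrite().
 * tgtd keeps no buffer of its own between requests.  Note that unless
 * the file was opened with O_DIRECT, pread() still goes through the
 * kernel page cache on the target host, i.e. any caching you observe
 * lives below tgtd. */
static int handle_read(int backing_fd, void *buf, size_t len, off_t off)
{
    ssize_t ret = pread(backing_fd, buf, len, off);

    if (ret != (ssize_t)len)
        return ret < 0 ? -errno : -EIO; /* surfaces as a SCSI error */
    return 0;
}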

So if you see caching effects, that caching must be in the underlying device.


The parameters you changed:
write-cache off
read-cache off

These parameters only affect what we set in the mode page for
caching; they do not affect TGT's behaviour at all, since tgt never
looks at them (the page is just a blob that we send back to the
client when it asks for the caching mode page).
I.e. TGT never does any caching, but you can use these settings to
control what tgt tells the client if the client asks
"do you do caching?"

As such, while the settings don't affect TGT itself, they can have an
effect on the initiator, since it may do I/O differently depending on
whether it thinks the target is using caching or not (for instance,
the Linux sd driver decides whether to send SYNCHRONIZE CACHE
commands based on the WCE bit it sees in this page).



As Maurits replied to you, it is probably better to use active/passive
failover anyway.

On Thu, May 16, 2013 at 2:18 AM, Arne Van Theemsche <arnevt at gmail.com> wrote:
> Hi there
>
> I have the following setup:
>
> 2 identical target machines, each consisting of:
>
> HW RAID controller
> LVM on top of it
> DRBD on top of LV (dual primary, don't panic yet)
> exported as a target
>
>
>
> Both targets are imported by 1 initiator, which makes a multipath
> device of them (in Active/Backup mode), so no load balancing.
>
> On that device I make an XFS filesystem.
>
> The reason for this setup is HA: if one target fails, the other
> takes over (after a 20s timeout).
>
> Everything keeps going fine until the initiator machine fails or
> reboots and multipath chooses another path at boot time (in other
> words: it chooses the other target as primary).
>
> To make a long story short: it seems that the tgtd process does some
> form of read caching, because if I disable the multipathing (ending
> up with 2 accessible block devices) and mount them one by one (not
> together, of course), the FS structure is not in sync. To rule out
> DRBD as the culprit, I mounted the FS on both target machines
> directly (using the DRBD block device), and both give the correct FS
> structure. It's only one of the two exported targets that gives the
> wrong info, so the only building block left is the tgtd process
> doing some form of caching. As a final test, if I reboot the
> initiator, the wrong info keeps coming back from that target.
>
> In the <target> section I put:
>    write-cache off
>    read-cache off
>
> The write-cache setting seems to have an effect, but read-cache
> seems to be silently ignored.
>
> Any advice?
>
> kind regards
--
To unsubscribe from this list: send the line "unsubscribe stgt" in
the body of a message to majordomo at vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html