[stgt] read only access on 1 LUN for multiple initiators

Kapetanakis Giannis bilias at edu.physics.uoc.gr
Mon Mar 1 00:10:07 CET 2010


On 28/2/2010 22:11, Tomasz Chmielewski wrote:
> On 28.02.2010 20:55, Pasi Kärkkäinen wrote:
>> This will be difficult. Normal filesystems (ext2, ext3, ext4, xfs, etc.)
>> are designed to be used on only one node, so they don't have
>> cluster-aware locking, cache flushing, etc.
>>
>> Each server (initiator) will have its own Linux kernel cache, so I bet
>> your setup won't work very easily, unless there's some way to
>> completely disable all caching in the initiator kernels.
>>
>> Basically, the problem is that the initiators are not aware of the changes
>> made to the filesystem, since the changes are made on other systems.
>
> This is exactly what happens.
>
> You can work around this by dropping the caches on all sides:
>
> echo 3 > /proc/sys/vm/drop_caches
>
> But generally, this is not recommended; use a distributed filesystem
> like GFS, NFS, or glusterfs instead.

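If I read Documentation/sysctl/vm.txt correctly, drop_caches only drops
clean (already written-back) caches, so running sync first makes sense,
and the value selects what gets dropped:

    sync                                # write out dirty pages first
    echo 1 > /proc/sys/vm/drop_caches   # free pagecache only
    echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes (slab)
    echo 3 > /proc/sys/vm/drop_caches   # free both
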
Thanks both for your answers.

Indeed, this is my problem: the initiators are not aware of any changes.
Locking does not seem like a problem to me, since it will be read-only access.
The server (target) will be the only one to write.

I will try this workaround `echo 3 > /proc/sys/vm/drop_caches` on the
initiators.
After all, they are web/ftp mirrors serving static data.
The only problem is that the data change (via rsync) every night...
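
As a rough sketch (the script path, the 04:30 schedule, and the assumption
that the target-side rsync has finished by then are all mine), each
initiator could drop its caches from cron right after the nightly rsync
window:

    #!/bin/sh
    # /usr/local/sbin/drop-iscsi-cache.sh (hypothetical path)
    # cron entry on each initiator, assuming the target-side rsync
    # finishes before 04:30:
    #   30 4 * * * /usr/local/sbin/drop-iscsi-cache.sh

    # write out any dirty pages first (there should be none on a
    # read-only mount)
    sync

    # drop pagecache, dentries and inodes so changed files are
    # re-read from the target instead of served from stale local cache
    echo 3 > /proc/sys/vm/drop_caches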

Could there be a race condition if a client is serving data while the data
are being flushed from the cache?

Is there any other, iSCSI-internal way to make the initiators sync?

regards,

Giannis
PS: GFS seems like the more trusted scenario...


