[sheepdog] Redundancy policy via iSCSI

Saeki Masaki saeki.masaki at po.ntts.co.jp
Tue Feb 3 09:53:59 CET 2015


Hi Hu,

Sheepdog has a mechanism that avoids placing copies of an object on nodes
with the same zone ID, so with all three of your nodes in zone 0 effectively
only one copy is stored. Can you try assigning a different zone ID to each
node, for example by restarting each sheep with a distinct zone as sketched
below your node list?
>>>    Id   Host:Port         V-Nodes       Zone
>>>     0   130.1.0.147:7000    	128          0
>>>     1   130.1.0.148:7000    	128          0
>>>     2   130.1.0.149:7000    	128          0
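
A minimal sketch (assuming the default port 7000 and a store directory of
/var/lib/sheepdog; adjust both to your actual setup), using sheep's
-z/--zone option:

    130.1.0.147 # sheep -z 0 -p 7000 /var/lib/sheepdog
    130.1.0.148 # sheep -z 1 -p 7000 /var/lib/sheepdog
    130.1.0.149 # sheep -z 2 -p 7000 /var/lib/sheepdog

After that, dog node list should show a different Zone value for each node,
and the three copies of each object can then be placed on different nodes.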

Best Regards, Saeki.

On 2015/02/03 17:09, Hitoshi Mitake wrote:
> At Tue, 03 Feb 2015 17:02:24 +0900,
> Hitoshi Mitake wrote:
>>
>>
>> Hi Hu,
>> Thanks for your report!
>>
>> At Tue, 3 Feb 2015 15:52:12 +0800,
>> hujianyang wrote:
>>>
>>> Hi Hitoshi,
>>>
>>> Sorry to disturb you.
>>>
>>> I'm testing the redundancy policy of sheepdog via iSCSI. I think
>>> that if I create a 1 GB v-disk, the total space cost of this device
>>> should be 3 * 1 GB under a 3-copy policy. But after testing, I find
>>> the cost of this device is only 1 GB. It seems no additional copies
>>> are created.
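>>>
>>> For reference, a 3-copy vdi like this is normally created with
>>> something like the following (-c sets the number of copies; the exact
>>> options may differ by sheepdog version):
>>>
>>>   # dog vdi create -c 3 Hu0 1G
>>>
>>> With that policy I'd expect roughly 3 GB of total Used space in
>>> dog node info once the 1 GB disk has been fully written.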
>>>
>>> I don't know what happened. I'd like to show my configuration and
>>> hope you could take some time to help me. Many thanks!
>>>
>>>
>>> linux-rme9:/mnt # dog cluster info
>>> Cluster status: running, auto-recovery enabled
>>>
>>> Cluster created at Tue Feb  3 23:07:13 2015
>>>
>>> Epoch Time           Version [Host:Port:V-Nodes,,,]
>>> 2015-02-03 23:07:13      1 [130.1.0.147:7000:128, 130.1.0.148:7000:128, 130.1.0.149:7000:128]
>>> linux-rme9:/mnt # dog node list
>>>    Id   Host:Port         V-Nodes       Zone
>>>     0   130.1.0.147:7000    	128          0
>>>     1   130.1.0.148:7000    	128          0
>>>     2   130.1.0.149:7000    	128          0
>>> linux-rme9:/mnt # dog vdi list
>>>    Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
>>>    Hu0          0  1.0 GB  1.0 GB  0.0 MB 2015-02-03 23:12   6e7762      3                22
>>> linux-rme9:/mnt # dog node info
>>> Id	Size	Used	Avail	Use%
>>>   0	261 GB	368 MB	260 GB	  0%
>>>   1	261 GB	336 MB	261 GB	  0%
>>>   2	261 GB	320 MB	261 GB	  0%
>>> Total	783 GB	1.0 GB	782 GB	  0%
>>>
>>> Total virtual image size	1.0 GB
>>>
>>> linux-rme9:/mnt # tgtadm --op show --mode target
>>> Target 1: iqn.2015.01.org.sheepdog
>>>      System information:
>>>          Driver: iscsi
>>>          State: ready
>>>      I_T nexus information:
>>>          I_T nexus: 3
>>>              Initiator: iqn.1996-04.de.suse:01:23a8f73738e7 alias: Fs-Server
>>>              Connection: 0
>>>                  IP Address: 130.1.0.10
>>>      LUN information:
>>>          LUN: 0
>>>              Type: controller
>>>              SCSI ID: IET     00010000
>>>              SCSI SN: beaf10
>>>              Size: 0 MB, Block size: 1
>>>              Online: Yes
>>>              Removable media: No
>>>              Prevent removal: No
>>>              Readonly: No
>>>              SWP: No
>>>              Thin-provisioning: No
>>>              Backing store type: null
>>>              Backing store path: None
>>>              Backing store flags:
>>>          LUN: 1
>>>              Type: disk
>>>              SCSI ID: IET     00010001
>>>              SCSI SN: beaf11
>>>              Size: 1074 MB, Block size: 512
>>>              Online: Yes
>>>              Removable media: No
>>>              Prevent removal: No
>>>              Readonly: No
>>>              SWP: No
>>>              Thin-provisioning: No
>>>              Backing store type: sheepdog
>>>              Backing store path: tcp:130.1.0.147:7000:Hu0
>>>              Backing store flags:
>>>      Account information:
>>>      ACL information:
>>>          ALL
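>>>
>>> For completeness, a sheepdog-backed LUN like LUN 1 above is typically
>>> defined with tgtadm roughly as follows (a sketch; the exact invocation
>>> may differ):
>>>
>>>   # tgtadm --lld iscsi --mode target --op new --tid 1 \
>>>       --targetname iqn.2015.01.org.sheepdog
>>>   # tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
>>>       --bstype sheepdog --backing-store tcp:130.1.0.147:7000:Hu0
>>>   # tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL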
>>>
>>>
>>> Client:
>>>   # iscsiadm -m node --targetname iqn.2015.01.org.sheepdog --portal 130.1.0.147:3260 --rescan
>>> Rescanning session [sid: 4, target: iqn.2015.01.org.sheepdog, portal: 130.1.0.147,3260]
>>>   # dd if=/dev/random of=/dev/sdg bs=2M
>>> dd: writing `/dev/sdg': No space left on device
>>> 0+13611539 records in
>>> 0+13611538 records out
>>> 1073741824 bytes (1.1 GB) copied, 956.511 s, 1.1 MB/s
>>
>> Hmm, that seems strange. For diagnosis, I have some questions (rough
>> example commands for each are sketched below the list):
>>
>> 1. Can you see any error messages in the log files of sheep?
>> 2. Could you provide listings of the obj/ directories on the sheep servers?
>> 3. Is this reproducible even if you make a file system on the iSCSI target
>>     and put data on the file system?
>> 4. Is this reproducible even if you append oflag=sync to the dd options?
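>>
>> Something along these lines would do (the store directory is assumed to
>> be the default /var/lib/sheepdog, and /mnt/test is just an example mount
>> point; adjust both to your setup):
>>
>>   # 1 and 2, on each sheep node (default store directory assumed):
>>   less /var/lib/sheepdog/sheep.log
>>   ls /var/lib/sheepdog/obj/ | wc -l
>>   # 3, on the iSCSI client:
>>   mkfs.ext4 /dev/sdg && mount /dev/sdg /mnt/test
>>   dd if=/dev/zero of=/mnt/test/file bs=1M count=512 conv=fsync
>>   # 4, on the iSCSI client:
>>   dd if=/dev/zero of=/dev/sdg bs=2M oflag=sync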
>
> Additionally, could you also provide the options that sheep is started with?
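>
> Something like this on each node would show them (just a generic way to
> grab the running command line):
>
>   ps -ef | grep '[s]heep'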
>
> Thanks,
> Hitoshi
>
>>
>> Thanks,
>> Hitoshi
>>
>>>
>>> Thanks!
>>> Hu
>>>