[sheepdog-users] [master branch] SIGABRT when doing: dog vdi check

Marcin Mirosław marcin at mejor.pl
Wed Jan 8 14:09:00 CET 2014


On 08.01.2014 09:53, Liu Yuan wrote:
> On Wed, Jan 08, 2014 at 09:47:51AM +0100, Marcin Mirosław wrote:
>> On 08.01.2014 07:21, Liu Yuan wrote:
>>> On Tue, Jan 07, 2014 at 03:40:44PM +0100, Marcin Mirosław wrote:
>>>> On 07.01.2014 14:38, Liu Yuan wrote:
>>>>> On Tue, Jan 07, 2014 at 01:29:40PM +0100, Marcin Mirosław wrote:
>>>>>> On 07.01.2014 12:50, Liu Yuan wrote:
>>>>>>> On Tue, Jan 07, 2014 at 11:14:09AM +0100, Marcin Mirosław wrote:
>>>>>>>> On 07.01.2014 11:05, Liu Yuan wrote:
>>>>>>>>> On Tue, Jan 07, 2014 at 10:51:18AM +0100, Marcin Mirosław wrote:
>>>>>>>>>> On 07.01.2014 03:00, Liu Yuan wrote:
>>>>>>>>>>> On Mon, Jan 06, 2014 at 05:38:41PM +0100, Marcin Mirosław wrote:
>>>>>>>>>>>> On 2014-01-06 08:27, Liu Yuan wrote:
>>>>>>>>>>>>> On Sat, Jan 04, 2014 at 04:13:27PM +0100, Marcin Mirosław wrote:
>>>>>>>>>>>>>> On 2014-01-04 06:28, Liu Yuan wrote:
>>>>>>>>>>>>>>> On Fri, Jan 03, 2014 at 10:51:26PM +0100, Marcin Mirosław wrote:
>>>>>>>>>>>>>>>> Hi!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi all!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I'm new to the "sheep run" ;) I'm just starting to try sheepdog, so
>>>>>>>>>>>>>>>> I'm probably doing many things wrong. I'm playing with sheepdog-0.7.6.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> First problem (SIGABRT): I started multiple sheep daemons on
>>>>>>>>>>>>>>>> localhost: # for x in 0 1 2 3 4; do sheep -c local -j size=128M
>>>>>>>>>>>>>>>> -p 700$x /mnt/sheep/metadata/$x,/mnt/sheep/storage/$x; done
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Next: # dog cluster info
>>>>>>>>>>>>>>>> Cluster status: Waiting for cluster to be formatted
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> # dog cluster format -c 2:1
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 0.7.6 doesn't support erasure coding. Try the latest master branch.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Now I'm on 486ace8ccbb [master]. How should I check the chosen redundancy?
>>>>>>>>>>>>>>  # cat /mnt/test/vdi/list
>>>>>>>>>>>>>>    Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
>>>>>>>>>>>>>>    testowy      0  1.0 GB  0.0 MB  0.0 MB 2014-01-04 15:07   cac836       3
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Here I can see 3 copies, but I can't see how many parity strips
>>>>>>>>>>>>>> are configured. Probably this isn't implemented yet?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Not yet. But currently you can run 'dog cluster info -s' to see the global
>>>>>>>>>>>>> policy scheme x:y (the one you set with 'dog cluster format -c x:y').
>>>>>>>>>>>>>
>>>>>>>>>>>>> With erasure coding, 'copies' takes on another meaning: the total number
>>>>>>>>>>>>> of data + parity objects. In your case, it is 2+1=3. But as you said, this is
>>>>>>>>>>>>> confusing; I'm thinking of adding an extra field to indicate the redundancy
>>>>>>>>>>>>> scheme per vdi.
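>>>>>>>>>>>>>
>>>>>>>>>>>>> For example (hypothetical numbers, just to illustrate the arithmetic):
>>>>>>>>>>>>>
>>>>>>>>>>>>> # dog cluster format -c 4:2   # 4 data strips + 2 parity strips
>>>>>>>>>>>>> # dog cluster info -s         # shows the 4:2 scheme
>>>>>>>>>>>>> # dog vdi list                # 'Copies' would then read 4+2 = 6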
>>>>>>>>>>>>>
>>>>>>>>>>>>> Well, as for the above issue, I can't reproduce it. Could you give me more
>>>>>>>>>>>>> environment information, such as whether your OS is 32- or 64-bit? What is
>>>>>>>>>>>>> your distro?
>>>>>>>>>>>>
>>>>>>>>>>>> Hi!
>>>>>>>>>>>> I'm using 64-bit Gentoo, gcc version 4.7.3 (Gentoo Hardened 4.7.3-r1
>>>>>>>>>>>> p1.4, pie-0.5.5), kernel 3.10 with Gentoo patches.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Does the problem still exist? I can't reproduce the issue yet. How do you
>>>>>>>>>>> reproduce it, step by step?
>>>>>>>>>>
>>>>>>>>>> Hi!
>>>>>>>>>> I install sheepdog-0.7.x, then:
>>>>>>>>>> # mkdir -p /mnt/sheep/{metadata,storage}
>>>>>>>>>> # for x in 0 1 2 3 4; do sheep -c local -j size=128M -p 700$x
>>>>>>>>>> /mnt/sheep/metadata/$x,/mnt/sheep/storage/$x; done
>>>>>>>>>> # dog cluster format -c 2
>>>>>>>>>> using backend plain store
>>>>>>>>>> # dog vdi create testowy 5G
>>>>>>>>>> # dog  vdi check testowy
>>>>>>>>>> PANIC: can't find next new idx
>>>>>>>>>> dog exits unexpectedly (Aborted).
>>>>>>>>>> dog() [0x4058da]
>>>>>>>>>> [...]
>>>>>>>>>>
>>>>>>>>>> I'm getting SIGABRT on every try.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On the same machine, with the master branch (not stable-0.7), you mentioned
>>>>>>>>> you can't reproduce the problem?
>>>>>>>>
>>>>>>>> With the master branch (commit a79e69f9ad9c5) I'm getting this message:
>>>>>>>> # dog  vdi check testowy
>>>>>>>> PANIC: can't find a valid vnode
>>>>>>>> dog exits unexpectedly (Aborted).
>>>>>>>> dog() [0x4057fa]
>>>>>>>> /lib64/libpthread.so.0(+0xfd8f) [0x7f6d43cd0d8f]
>>>>>>>> /lib64/libc.so.6(gsignal+0x38) [0x7f6d43951368]
>>>>>>>> /lib64/libc.so.6(abort+0x147) [0x7f6d439526c7]
>>>>>>>> dog() [0x40336e]
>>>>>>>> dog() [0x409d9f]
>>>>>>>> dog() [0x40cea5]
>>>>>>>> dog() [0x403927]
>>>>>>>> /lib64/libc.so.6(__libc_start_main+0xf4) [0x7f6d4393dc04]
>>>>>>>> dog() [0x403c6c]
>>>>>>>>
>>>>>>>> Would a full gdb backtrace be useful?
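>>>>>>>>
>>>>>>>> If so, I would grab it roughly like this (assuming core dumps are
>>>>>>>> enabled; the core file name may differ depending on core_pattern):
>>>>>>>>
>>>>>>>> # ulimit -c unlimited
>>>>>>>> # dog vdi check testowy                     # aborts and dumps core
>>>>>>>> # gdb --batch -ex 'bt full' $(which dog) core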
>>>>>>>
>>>>>>> Hmm, before you run 'dog vdi check', what is the output of 'dog cluster
>>>>>>> info', 'dog node list', and 'dog node md info --all'?
>>>>>>
>>>>>> Output using the master branch:
>>>>>> # dog cluster info
>>>>>> Cluster status: running, auto-recovery enabled
>>>>>>
>>>>>> Cluster created at Tue Jan  7 13:21:53 2014
>>>>>>
>>>>>> Epoch Time           Version
>>>>>> 2014-01-07 13:21:54      1 [127.0.0.1:7000, 127.0.0.1:7001,
>>>>>> 127.0.0.1:7002, 127.0.0.1:7003, 127.0.0.1:7004]
>>>>>>
>>>>>> # dog node list
>>>>>>   Id   Host:Port         V-Nodes       Zone
>>>>>>    0   127.0.0.1:7000           128   16777343
>>>>>>    1   127.0.0.1:7001           128   16777343
>>>>>>    2   127.0.0.1:7002           128   16777343
>>>>>>    3   127.0.0.1:7003           128   16777343
>>>>>>    4   127.0.0.1:7004           128   16777343
>>>>>>
>>>>>> # dog node md info --all
>>>>>> Id      Size    Used    Avail   Use%    Path
>>>>>> Node 0:
>>>>>>  0      4.4 GB  4.0 MB  4.4 GB    0%    /mnt/sheep/storage/0
>>>>>> Node 1:
>>>>>>  0      4.4 GB  0.0 MB  4.4 GB    0%    /mnt/sheep/storage/1
>>>>>> Node 2:
>>>>>>  0      4.4 GB  0.0 MB  4.4 GB    0%    /mnt/sheep/storage/2
>>>>>> Node 3:
>>>>>>  0      4.4 GB  0.0 MB  4.4 GB    0%    /mnt/sheep/storage/3
>>>>>> Node 4:
>>>>>>  0      4.4 GB  0.0 MB  4.4 GB    0%    /mnt/sheep/storage/4
>>>>>>
>>>>>
>>>>> The very strange thing in your output is that only 1 copy was actually
>>>>> written when you executed 'dog vdi create', even though you formatted the
>>>>> cluster with two copies specified.
>>>>>
>>>>> You can verify this with
>>>>>
>>>>> ls /mnt/sheep/storage/*/
>>>>>
>>>>> I guess you will only see one object. I don't know why this happened.
>>>>
>>>> It is as you said:
>>>> # ls /mnt/sheep/storage/*/
>>>> /mnt/sheep/storage/0/:
>>>> 80cac83600000000
>>>>
>>>> /mnt/sheep/storage/1/:
>>>>
>>>> /mnt/sheep/storage/2/:
>>>>
>>>> /mnt/sheep/storage/3/:
>>>>
>>>> /mnt/sheep/storage/4/:
>>>>
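>>>> With -c 2 I would have expected the object to show up in two of the
>>>> storage directories, e.g. (hypothetical placement):
>>>>
>>>> /mnt/sheep/storage/0/:
>>>> 80cac83600000000
>>>>
>>>> /mnt/sheep/storage/3/:
>>>> 80cac83600000000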
>>>>
>>>> Now I'm on commit a79e69f9ad9c and the problem still exists for me
>>>> (contrary to 0.7-stable). I noticed that the files "sheepdog_shm" and
>>>> "lock" appeared in my /tmp. Is that correct?
>>>>
> 
> 'lock' isn't created by the sheep daemon as far as I know. We create
> sheepdog_locks for the local driver.
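>
> You can see what the local driver actually created with, e.g.:
>
> # ls -l /tmp/sheepdog*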
> 
>>>
>>> I suspect there is actually only one node in the cluster, so 'vdi check'
>>> panics out.
>>>
>>> Before you run 'vdi check':
>>>
>>> for i in `seq 0 4`; do dog cluster info -p 700$i; done
>>>
>>> Is every node's output the same?
>>>
>>>
>>> for i in `seq 0 4`; do dog node list -p 700$i; done
>>>
>>> The same too?
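>>>
>>> To spot a diverging node quickly, you could also hash each node's view, e.g.:
>>>
>>> for i in `seq 0 4`; do dog node list -p 700$i | md5sum; done
>>>
>>> All five checksums should be identical.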
>>
>> Hi!
>> The output looks like this:
>>
>> # for i in `seq 0 4`;do dog cluster info -p 700$i;done
>> Cluster status: running, auto-recovery enabled
>>
>> Cluster created at Wed Jan  8 09:42:40 2014
>>
>> Epoch Time           Version
>> 2014-01-08 09:42:41      1 [127.0.0.1:7000, 127.0.0.1:7001,
>> 127.0.0.1:7002, 127.0.0.1:7003, 127.0.0.1:7004]
>> Cluster status: running, auto-recovery enabled
>>
>> Cluster created at Wed Jan  8 09:42:40 2014
>>
>> Epoch Time           Version
>> 2014-01-08 09:42:40      1 [127.0.0.1:7000, 127.0.0.1:7001,
>> 127.0.0.1:7002, 127.0.0.1:7003, 127.0.0.1:7004]
>> Cluster status: running, auto-recovery enabled
>>
>> Cluster created at Wed Jan  8 09:42:40 2014
>>
>> Epoch Time           Version
>> 2014-01-08 09:42:41      1 [127.0.0.1:7000, 127.0.0.1:7001,
>> 127.0.0.1:7002, 127.0.0.1:7003, 127.0.0.1:7004]
>> Cluster status: running, auto-recovery enabled
>>
>> Cluster created at Wed Jan  8 09:42:40 2014
>>
>> Epoch Time           Version
>> 2014-01-08 09:42:40      1 [127.0.0.1:7000, 127.0.0.1:7001,
>> 127.0.0.1:7002, 127.0.0.1:7003, 127.0.0.1:7004]
>> Cluster status: running, auto-recovery enabled
>>
>> Cluster created at Wed Jan  8 09:42:40 2014
>>
>> Epoch Time           Version
>> 2014-01-08 09:42:40      1 [127.0.0.1:7000, 127.0.0.1:7001,
>> 127.0.0.1:7002, 127.0.0.1:7003, 127.0.0.1:7004]
>>
>> # for i in `seq 0 4`;do dog node list -p 700$i;done
>>   Id   Host:Port         V-Nodes       Zone
>>    0   127.0.0.1:7000           128   16777343
>>    1   127.0.0.1:7001           128   16777343
>>    2   127.0.0.1:7002           128   16777343
>>    3   127.0.0.1:7003           128   16777343
>>    4   127.0.0.1:7004           128   16777343
>>   Id   Host:Port         V-Nodes       Zone
>>    0   127.0.0.1:7000           128   16777343
>>    1   127.0.0.1:7001           128   16777343
>>    2   127.0.0.1:7002           128   16777343
>>    3   127.0.0.1:7003           128   16777343
>>    4   127.0.0.1:7004           128   16777343
>>   Id   Host:Port         V-Nodes       Zone
>>    0   127.0.0.1:7000           128   16777343
>>    1   127.0.0.1:7001           128   16777343
>>    2   127.0.0.1:7002           128   16777343
>>    3   127.0.0.1:7003           128   16777343
>>    4   127.0.0.1:7004           128   16777343
>>   Id   Host:Port         V-Nodes       Zone
>>    0   127.0.0.1:7000           128   16777343
>>    1   127.0.0.1:7001           128   16777343
>>    2   127.0.0.1:7002           128   16777343
>>    3   127.0.0.1:7003           128   16777343
>>    4   127.0.0.1:7004           128   16777343
>>   Id   Host:Port         V-Nodes       Zone
>>    0   127.0.0.1:7000           128   16777343
>>    1   127.0.0.1:7001           128   16777343
>>    2   127.0.0.1:7002           128   16777343
>>    3   127.0.0.1:7003           128   16777343
>>    4   127.0.0.1:7004           128   16777343
>>
>>
> 
> Everything looks fine. It is very weird. With 5 nodes, only 1 copy was
> written successfully. I have no idea what happened, and I can't reproduce
> the problem on my local machine.

I started only two sheep daemons and turned on the debug log level on the
nodes. There is something that looks suspicious to me in the master
(port 7000) sheep.log:
Jan 08 14:01:58  DEBUG [main] clear_client_info(826) connection seems to be dead
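
That line shows up repeatedly; I counted the occurrences with, e.g.:

# grep -c 'connection seems to be dead' sheep.log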

I'm attaching the logs from both sheep daemons.

Marcin

-------------- next part --------------
Jan 08 13:59:44   INFO [main] md_add_disk(311) /mnt/sheep/storage/0, vdisk nr 6, total disk 1
Jan 08 13:59:44   INFO [main] send_join_request(781) IPv4 ip:127.0.0.1 port:7000
Jan 08 13:59:44   INFO [main] check_host_env(478) Allowed open files 1000000, suggested 6144000
Jan 08 13:59:44   INFO [main] main(882) sheepdog daemon (version 0.7.50) started
Jan 08 14:01:00   INFO [io 19431] local_get_loglevel(983) returning log level: 6
Jan 08 14:01:11  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:11  DEBUG [main] client_handler(885) 19, 0
Jan 08 14:01:11  DEBUG [main] clear_client_info(826) connection seems to be dead
Jan 08 14:01:11  DEBUG [main] clear_client_info(836) refcnt:0, fd:21, 127.0.0.1:33489
Jan 08 14:01:11  DEBUG [main] destroy_client(817) connection from: 127.0.0.1:33489
Jan 08 14:01:14  DEBUG [main] listen_handler(965) accepted a new connection: 15
Jan 08 14:01:14  DEBUG [main] client_handler(885) 19, 0
Jan 08 14:01:14  DEBUG [main] clear_client_info(826) connection seems to be dead
Jan 08 14:01:14  DEBUG [main] clear_client_info(836) refcnt:0, fd:15, 127.0.0.1:33490
Jan 08 14:01:14  DEBUG [main] destroy_client(817) connection from: 127.0.0.1:33490
Jan 08 14:01:14  DEBUG [main] listen_handler(965) accepted a new connection: 15
Jan 08 14:01:14  DEBUG [main] client_handler(885) 1, 0
Jan 08 14:01:14  DEBUG [main] rx_main(764) 15, 127.0.0.1:33491
Jan 08 14:01:14  DEBUG [main] queue_request(454) GET_LOGLEVEL, 2
Jan 08 14:01:14  DEBUG [io 19431] do_process_work(1393) b9, 0, 0
Jan 08 14:01:14   INFO [io 19431] local_get_loglevel(983) returning log level: 7
Jan 08 14:01:14  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:14  DEBUG [main] client_handler(885) 19, 0
Jan 08 14:01:14  DEBUG [main] clear_client_info(826) connection seems to be dead
Jan 08 14:01:14  DEBUG [main] clear_client_info(836) refcnt:0, fd:15, 127.0.0.1:33491
Jan 08 14:01:14  DEBUG [main] destroy_client(817) connection from: 127.0.0.1:33491
Jan 08 14:01:27  DEBUG [main] listen_handler(965) accepted a new connection: 15
Jan 08 14:01:27  DEBUG [main] client_handler(885) 1, 0
Jan 08 14:01:27  DEBUG [main] rx_main(764) 15, 127.0.0.1:33505
Jan 08 14:01:27  DEBUG [main] queue_request(454) GET_NODE_LIST, 2
Jan 08 14:01:27  DEBUG [io 19431] do_process_work(1393) 82, 0, 0
Jan 08 14:01:27  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:27  DEBUG [main] client_handler(885) 1, 0
Jan 08 14:01:27  DEBUG [main] rx_main(764) 15, 127.0.0.1:33505
Jan 08 14:01:27  DEBUG [main] queue_request(454) READ_VDIS, 2
Jan 08 14:01:27  DEBUG [io 19431] do_process_work(1393) 15, 0, 0
Jan 08 14:01:27  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:27  DEBUG [main] client_handler(885) 1, 0
Jan 08 14:01:27  DEBUG [main] rx_main(764) 15, 127.0.0.1:33505
Jan 08 14:01:27  DEBUG [main] queue_request(454) MAKE_FS, 2
Jan 08 14:01:27  DEBUG [main] queue_cluster_request(300) MAKE_FS (0x7f0d880008c0)
Jan 08 14:01:27  DEBUG [main] add_event(317) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:27  DEBUG [main] add_event(319) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:27  DEBUG [main] add_event(319) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:27  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:27  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:27  DEBUG [main] local_handler(519) read siginfo
Jan 08 14:01:27  DEBUG [main] local_process_event(426) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:27  DEBUG [main] local_process_event(427) callbacked = 0, removed = 0
Jan 08 14:01:27  DEBUG [main] local_process_event(452) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:27  DEBUG [main] local_process_event(452) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:27  DEBUG [main] sd_notify_handler(703) op MAKE_FS, size: 102, from: IPv4 ip:127.0.0.1 port:7000
Jan 08 14:01:27  DEBUG [main] default_format(530) try get a clean store
Jan 08 14:01:27  DEBUG [main] default_init(246) use plain store driver
Jan 08 14:01:27  DEBUG [main] update_epoch_log(26) update epoch: 1, 2
Jan 08 14:01:28  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:28  DEBUG [main] client_handler(885) 19, 0
Jan 08 14:01:28  DEBUG [main] clear_client_info(826) connection seems to be dead
Jan 08 14:01:28  DEBUG [main] clear_client_info(836) refcnt:0, fd:15, 127.0.0.1:33505
Jan 08 14:01:28  DEBUG [main] destroy_client(817) connection from: 127.0.0.1:33505
Jan 08 14:01:57  DEBUG [main] listen_handler(965) accepted a new connection: 21
Jan 08 14:01:57  DEBUG [main] client_handler(885) 1, 0
Jan 08 14:01:57  DEBUG [main] rx_main(764) 21, 127.0.0.1:33514
Jan 08 14:01:57  DEBUG [main] queue_request(454) GET_NODE_LIST, 1
Jan 08 14:01:57  DEBUG [io 19431] do_process_work(1393) 82, 0, 1
Jan 08 14:01:57  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:57  DEBUG [main] client_handler(885) 1, 0
Jan 08 14:01:57  DEBUG [main] rx_main(764) 21, 127.0.0.1:33514
Jan 08 14:01:57  DEBUG [main] queue_request(454) NEW_VDI, 1
Jan 08 14:01:57  DEBUG [main] queue_cluster_request(300) NEW_VDI (0x7f0d880008c0)
Jan 08 14:01:57  DEBUG [main] add_event(317) type = 5, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] add_event(319) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] add_event(319) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [main] local_handler(519) read siginfo
Jan 08 14:01:57  DEBUG [main] local_process_event(426) type = 5, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(427) callbacked = 0, removed = 0
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [block] do_process_work(1393) 11, 140000000, 1
Jan 08 14:01:57  DEBUG [block] vdi_lookup(627) testowy left cac836 right cac836, 8
Jan 08 14:01:57  DEBUG [main] queue_request(454) NOTIFY_VDI_ADD, 1
Jan 08 14:01:57  DEBUG [main] queue_cluster_request(300) NOTIFY_VDI_ADD (0x7f0d900008c0)
Jan 08 14:01:57  DEBUG [main] add_event(317) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] add_event(319) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] add_event(319) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [main] local_handler(519) read siginfo
Jan 08 14:01:57  DEBUG [main] local_process_event(426) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(427) callbacked = 0, removed = 0
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [main] sd_notify_handler(703) op NOTIFY_VDI_ADD, size: 96, from: IPv4 ip:127.0.0.1 port:7000
Jan 08 14:01:57  DEBUG [main] add_vdi_state(158) cac836, 2, 0
Jan 08 14:01:57  DEBUG [main] local_process_event(426) type = 5, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(427) callbacked = 1, removed = 0
Jan 08 14:01:57  DEBUG [block] create_vdi(273) testowy: size 5368709120, new_vid cac836, copies 2, snapid 1 copy policy 0store policy 0
Jan 08 14:01:57  DEBUG [main] queue_request(454) CREATE_AND_WRITE_OBJ, 1
Jan 08 14:01:57  DEBUG [gway 19430] do_process_work(1393) 1, 80cac83600000000, 1
Jan 08 14:01:57  DEBUG [gway 19430] gateway_forward_request(504) 80cac83600000000
Jan 08 14:01:57  DEBUG [gway 19430] prepare_replication_requests(38) 80cac83600000000
Jan 08 14:01:57  DEBUG [gway 19430] sockfd_cache_get_long(344) create cache connection 127.0.0.1:7000 idx 0
Jan 08 14:01:57  DEBUG [gway 19430] connect_to(209) 23, 127.0.0.1:7000
Jan 08 14:01:57  DEBUG [main] listen_handler(965) accepted a new connection: 24
Jan 08 14:01:57  DEBUG [main] client_handler(885) 1, 0
Jan 08 14:01:57  DEBUG [gway 19430] gateway_forward_request(563) nr_sent 1, err 0
Jan 08 14:01:57  DEBUG [main] rx_main(764) 24, 127.0.0.1:33515
Jan 08 14:01:57  DEBUG [main] queue_request(454) CREATE_AND_WRITE_PEER, 1
Jan 08 14:01:57  DEBUG [io 19431] do_process_work(1393) a3, 80cac83600000000, 1
Jan 08 14:01:57  DEBUG [io 19431] default_create_and_write(351) 80cac83600000000
Jan 08 14:01:58  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:58  DEBUG [gway 19430] wait_forward_request(431) 0, revents 1
Jan 08 14:01:58  DEBUG [gway 19430] sockfd_cache_put_long(372) 127.0.0.1:7000 idx 0
Jan 08 14:01:58  DEBUG [gway 19430] forward_info_update(340) 1, 0
Jan 08 14:01:58  DEBUG [main] cluster_op_done(234) NEW_VDI (0x7f0d880008c0)
Jan 08 14:01:58  DEBUG [main] add_event(317) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] add_event(319) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] add_event(319) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:58  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] shm_queue_notify(218) send signal to IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:58  DEBUG [main] local_handler(519) read siginfo
Jan 08 14:01:58  DEBUG [main] local_process_event(426) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] local_process_event(427) callbacked = 0, removed = 0
Jan 08 14:01:58  DEBUG [main] local_process_event(452) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] local_process_event(452) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:58  DEBUG [main] sd_notify_handler(703) op NEW_VDI, size: 352, from: IPv4 ip:127.0.0.1 port:7000
Jan 08 14:01:58  DEBUG [main] post_cluster_new_vdi(121) done 0 cac836
Jan 08 14:01:58  DEBUG [main] local_process_event(426) type = 5, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] local_process_event(427) callbacked = 1, removed = 1
Jan 08 14:01:58  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:58  DEBUG [main] client_handler(885) 19, 0
Jan 08 14:01:58  DEBUG [main] clear_client_info(826) connection seems to be dead
Jan 08 14:01:58  DEBUG [main] clear_client_info(836) refcnt:0, fd:21, 127.0.0.1:33514
Jan 08 14:01:58  DEBUG [main] destroy_client(817) connection from: 127.0.0.1:33514
-------------- next part --------------
Jan 08 13:59:46   INFO [main] md_add_disk(311) /mnt/sheep/storage/1, vdisk nr 5, total disk 1
Jan 08 13:59:46   INFO [main] send_join_request(781) IPv4 ip:127.0.0.1 port:7001
Jan 08 13:59:46   INFO [main] check_host_env(478) Allowed open files 1000000, suggested 6144000
Jan 08 13:59:46   INFO [main] main(882) sheepdog daemon (version 0.7.50) started
Jan 08 14:01:54  DEBUG [main] client_handler(885) 4, 0
Jan 08 14:01:54  DEBUG [main] client_handler(885) 19, 0
Jan 08 14:01:54  DEBUG [main] clear_client_info(826) connection seems to be dead
Jan 08 14:01:54  DEBUG [main] clear_client_info(836) refcnt:0, fd:20, 127.0.0.1:34713
Jan 08 14:01:54  DEBUG [main] destroy_client(817) connection from: 127.0.0.1:34713
Jan 08 14:01:57  DEBUG [main] local_handler(519) read siginfo
Jan 08 14:01:57  DEBUG [main] local_process_event(426) type = 5, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(427) callbacked = 0, removed = 0
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [main] local_handler(519) read siginfo
Jan 08 14:01:57  DEBUG [main] local_process_event(426) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(427) callbacked = 0, removed = 0
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(452) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:57  DEBUG [main] sd_notify_handler(703) op NOTIFY_VDI_ADD, size: 96, from: IPv4 ip:127.0.0.1 port:7000
Jan 08 14:01:57  DEBUG [main] add_vdi_state(158) cac836, 2, 0
Jan 08 14:01:57  DEBUG [main] local_process_event(426) type = 5, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:57  DEBUG [main] local_process_event(427) callbacked = 1, removed = 0
Jan 08 14:01:58  DEBUG [main] local_handler(519) read siginfo
Jan 08 14:01:58  DEBUG [main] local_process_event(426) type = 6, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] local_process_event(427) callbacked = 0, removed = 0
Jan 08 14:01:58  DEBUG [main] local_process_event(452) 0: IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] local_process_event(452) 1: IPv4 ip:127.0.0.1 port:7001 pid:19439
Jan 08 14:01:58  DEBUG [main] sd_notify_handler(703) op NEW_VDI, size: 352, from: IPv4 ip:127.0.0.1 port:7000
Jan 08 14:01:58  DEBUG [main] post_cluster_new_vdi(121) done 0 cac836
Jan 08 14:01:58  DEBUG [main] local_process_event(426) type = 5, sender = IPv4 ip:127.0.0.1 port:7000 pid:19425
Jan 08 14:01:58  DEBUG [main] local_process_event(427) callbacked = 1, removed = 1

