On 04/09/2012 10:47 AM, MORITA Kazutaka wrote:
> On Mon, Apr 9, 2012 at 10:28 AM, Liu Yuan <namei.unix at gmail.com> wrote:
>> On 04/06/2012 12:09 AM, MORITA Kazutaka wrote:
>>
>>> At Thu, 1 Mar 2012 10:20:18 +0800,
>>> Liu Yuan wrote:
>>>>
>>>> +}
>>>> +
>>>> +notrace struct caller *trace_lookup_ip(unsigned long ip, int create)
>>>> +{
>>>> +	int h = trace_hash(ip);
>>>> +	struct hlist_head *head = trace_hashtable + h;
>>>> +	struct hlist_node *node;
>>>> +	struct ipinfo info;
>>>> +	struct caller *new = NULL;
>>>> +
>>>> +	pthread_mutex_lock(&trace_lock);
>>>> +	if (hlist_empty(head))
>>>
>>> hlist_empty() calls trace_lookup_ip() again if it is not inlined, and
>>> it results in a deadlock.
>>>
>>> How about preventing a recursive call of trace_lookup_ip()?
>>>
>>> E.g.:
>>>
>>> notrace struct caller *trace_lookup_ip(unsigned long ip, int create)
>>> {
>>> 	int h;
>>> 	struct hlist_head *head;
>>> 	struct hlist_node *node;
>>> 	struct ipinfo info;
>>> 	struct caller *new = NULL;
>>> 	static __thread int looking_up_ip;
>>>
>>> 	if (looking_up_ip)
>>> 		return NULL;
>>>
>>> 	looking_up_ip = 1;
>>>
>>> 	pthread_mutex_lock(&trace_lock);
>>>
>>> 	...
>>>
>>> 	pthread_mutex_unlock(&trace_lock);
>>>
>>> 	looking_up_ip = 0;
>>> 	return new;
>>> }
>>>
>>
>> Nope, this brings more problems than it resolves:
>>
>> 1) returning NULL while the entry does exist will break the current
>> trace logic

> Is it difficult to change the logic?

Yes, kind of hard. Currently we call trace_lookup_ip() to check whether
an IP is registered or not; the upper layer (graph.c) cannot handle the
case where trace_lookup_ip() fails but the IP is already registered.

>> 2) a __thread variable does nothing to exclude concurrent access

> Concurrent access is excluded by pthread_mutex_lock(). The __thread
> variable just prevents the same thread from re-entering the core of
> the trace_lookup_ip code.

>> 3) a one-line function like hlist_empty() will be inlined by any sane
>> compiler that supports inlining.

> If we add --enable-debug to the configure options, the function is not
> inlined.

Even so, I don't see the point of --enable-debug, which simply passes
-O0 to gcc. Does this mean we don't trust GCC's reordering &
optimisation? I don't think sheepdog is low-level enough to run into
that problem. If you are really concerned about it, we can add an
always_inline attribute to hlist_empty().

>>
>>>
>>>> +		goto not_found;
>>>> +
>>>> +	hlist_for_each_entry(new, node, head, hash) {
>>>> +		if (new->mcount == ip)
>>>> +			goto out;
>>>> +	}
>>>> +not_found:
>>>> +	if (get_ipinfo(ip, &info) < 0) {
>>>> +		dprintf("ip: %lx not found\n", ip);
>>>> +		new = NULL;
>>>> +		goto out;
>>>> +	}
>>>> +	if (create) {
>>>> +		/* unlock to avoid deadlock */
>>>> +		pthread_mutex_unlock(&trace_lock);
>>>
>>> I think we don't need the unlock here if we prevent recursion.
>>>
>>>> +		new = xzalloc(sizeof(*new));
>>>> +		pthread_mutex_lock(&trace_lock);

> IIUC, you call pthread_mutex_unlock()/lock() here because xzalloc()
> will call another trace_lookup_ip() and it results in a deadlock, yes?

Yes.

> I think this code is really hard to maintain because we cannot call
> other functions which wouldn't be inlined. In addition, it smells
> like a race condition, since another function can enter the core of
> trace_lookup_ip() just after we release the lock here.

Yes, some code in the Linux kernel has this race too (an ugly
practice), so here I just assume it won't happen. But if we are really
concerned about it, we can add 'notrace' to xzalloc() to get rid of the
race completely.
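To make both suggestions concrete, here is a rough, untested sketch.
The hlist types and the xzalloc() body below are simplified stand-ins
for the real ones in list.h/util.c, and it assumes our notrace macro
expands to __attribute__((no_instrument_function)), which is what
suppresses the mcount call gcc emits under -pg:

#include <stdlib.h>
#include <stdio.h>

#define notrace		__attribute__((no_instrument_function))
#define always_inline	inline __attribute__((always_inline))

/* simplified stand-ins for the hlist types in list.h */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

/* 1) Force hlist_empty() to be inlined even at -O0 (--enable-debug),
 *    so it can never become an out-of-line, mcount-instrumented call
 *    that re-enters trace_lookup_ip(): */
static always_inline int hlist_empty(const struct hlist_head *h)
{
	return !h->first;
}

/* 2) Mark xzalloc() notrace so its entry is never instrumented; then
 *    trace_lookup_ip() could keep trace_lock held across the
 *    allocation instead of the unlock/lock dance: */
notrace void *xzalloc(size_t size)
{
	void *ret = calloc(1, size);	/* body is illustrative only */

	if (!ret) {
		fprintf(stderr, "oom\n");
		abort();
	}
	return ret;
}

With both in place the recursion simply cannot happen, so we would need
neither the __thread guard nor the unlock/lock pair around xzalloc().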
BTW, speaking of code maintainability, a coroutine-based approach is
much harder to read & maintain.

Thanks,
Yuan