gluster-devel

From: Raghavendra Gowdappa
Subject: Re: [Gluster-devel] Is it safe to use synctask_{wake, yield} outside of __wake and __yield macros?
Date: Tue, 19 Feb 2013 01:03:46 -0500 (EST)

> > Where/how is callcnt set, and decremented?
> The following are the two structures which hold 'state' information of
> mgmt ops in progress on a synctask,
> struct gd_aggr_ {
>         int call_cnt;
>         int npeers;
>         gf_lock_t lock;
>         int op_ret;
>         int op_errno;
>         struct synctask *task;
>         struct syncargs **args;
> };
> typedef struct gd_aggr_ gd_aggr_t;
> 
> struct gd_local_ {
>         int index;
>         gd_aggr_t *aggr;
> };
> typedef struct gd_local_ gd_local_t;
> 
> A typical mgmt operation's callbk would look as follows,

I assume the code snippet below is the complementary function of gd_lock_op_phase.

> 
> ...
> out:
>  LOCK (&aggr->lock);
>         {
>                 call_cnt = aggr->call_cnt--;

This should be call_cnt = --aggr->call_cnt; with the post-decrement, call_cnt gets the old value, so the last callback to run would see 1 instead of 0.

>         }
>         UNLOCK (&aggr->lock);
> 
>         gd_local_wipe (local);
>         STACK_DESTROY (frame->root);
> 
>         if (call_cnt) {
Shouldn't this be
          if (call_cnt == 0) {
so that the aggregation and synctask_wake() happen only once, in the last callback?
>                 for (i = 0; i < aggr->npeers; i++){
>                         if (aggr->args[index]->op_ret) {
>                                 aggr->op_ret = -1;
>                                 break;
>                         }
>                 }
>                 synctask_wake (aggr->task);
> ...
> 
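With both of the corrections above folded in, the tail of the callback would
look roughly like this (only a sketch against the snippet you posted; I have
also assumed args[i], not args[index], is what was intended inside the loop):

        LOCK (&aggr->lock);
        {
                call_cnt = --aggr->call_cnt;  /* pre-decrement: last callback sees 0 */
        }
        UNLOCK (&aggr->lock);

        gd_local_wipe (local);
        STACK_DESTROY (frame->root);

        if (call_cnt == 0) {
                /* only the last callback aggregates results and wakes the task */
                for (i = 0; i < aggr->npeers; i++) {
                        if (aggr->args[i]->op_ret) {
                                aggr->op_ret = -1;
                                break;
                        }
                }
                synctask_wake (aggr->task);
        }
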
> gd_aggr_t is initialised before we start issuing the batch of the same
> kind of mgmt operation (e.g. lock) to all the peers. Each frame's local
> has a pointer to the same single instance of gd_aggr_t. This makes it
> possible to have a call_cnt mechanism across frames without another
> frame being used.

The general approach seems fine. But you are falling into the trap of not
initializing call_cnt before initiating _any_ of the calls: if call_cnt is
incremented as each lock call is issued (see the loop in gd_lock_op_phase),
a callback can bring it back to zero and wake the task even though locking
is not yet complete on all the peers.
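
On the issuing side, something along these lines would avoid that race (again
only a sketch built on the structures you posted; gd_send_lock_req_to_peer is
a made-up placeholder for whatever non-blocking call you issue per peer):

static int
gd_lock_op_phase_sketch (gd_aggr_t *aggr, int npeers)
{
        int i = 0;

        /* fix the fan-out count *before* the first request goes out, so an
         * early callback cannot drop call_cnt to 0 prematurely */
        aggr->call_cnt = npeers;
        aggr->npeers   = npeers;
        aggr->task     = synctask_get ();  /* task the callbacks will wake */

        for (i = 0; i < npeers; i++) {
                /* placeholder for the real non-blocking lock request
                 * sent to peer i */
                gd_send_lock_req_to_peer (aggr, i);
        }

        /* yield only after every request has been issued; the last callback
         * to take call_cnt to 0 calls synctask_wake (aggr->task) */
        synctask_yield (aggr->task);

        return aggr->op_ret;
}

The wake from the last callback may then arrive before or after the yield
here; I am assuming the synctask machinery handles the wake-before-yield
case, which is worth double-checking.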

> Hope that answers your question.
> 
> Krish
> > 
> > 
> > Avati
> > 
> > Let me know if we can come up with a generic framework for what I am
> > trying to do here.
> > 
> > thanks,
> > krish
> > 
> > 
> > 
> > On 02/14/2013 05:23 AM, Anand Avati wrote:
> > 
> > So you are not using the SYNCOP() macro, right? Can you show a code
> > snippet of how you are trying to fan out and yield? We could probably
> > come up with a generic framework for such a fan-out->yield->wake
> > pattern.
> > 
> > 
> > You should be able to call syncop_yield() instead of __yield() if you
> > are _sure_ that the caller is going to be from within a syncenv.
> > 
> > 
> > Avati
> > 
> > 
> > On Wed, Feb 13, 2013 at 11:29 AM, Krishnan Parthasarathi
> > <address@hidden> wrote:
> > 
> > 
> > In glusterd, I am trying to perform a series of syncops in a batch,
> > i.e., yield the thread once all the non-blocking operations are
> > queued. The wakeup of the yielded thread happens as part of the
> > call_cnt mechanism in the callback(s).
> > 
> > Given this, I wanted to know if I would be flouting any assumptions
> > if I used synctask_yield and synctask_wake as opposed to their macro
> > counterparts. More specifically, is there a chance that synctask_get()
> > would return NULL on a thread which is part of a syncenv's thread
> > pool?
> > 
> > thanks,
> > krish
> > 
> > 
> 


