
From: Krishna Srinivas
Subject: Re: [Gluster-devel] Server-side AFR + Failover and mixed server/client-side AFR
Date: Wed, 7 May 2008 14:12:43 +0530

On Wed, May 7, 2008 at 2:07 PM,  <address@hidden> wrote:
> On Wed, 7 May 2008, Krishna Srinivas wrote:
>
> > > > > >  Is this an issue in server-side-only AFR? I have two servers
> > > > > >  which are also clients of themselves, and they both list their
> > > > > >  local subvolume first and remote subvolume second. Is this a
> > > > > >  problem? What are the possible consequences of this?
> > > > >
> > > > >  It will be a problem. The "first" subvol is always the "lock"
> > > > >  server. Consider a case where you are creating a file
> > > > >  simultaneously on two clients; only one of them should succeed.
> > > > >  If the AFR subvols are not in the same order, chances are that
> > > > >  both clients return success for file creation with the same name.
> > > >
> > > > Hence you have "option read-subvolume" to speed up read() calls so
> > > > that they can be served from the local subvol.
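To illustrate the point about ordering, here is a minimal client-volfile sketch (the hostnames, volume names, and addresses are hypothetical, and the syntax follows the 1.3-era volfile conventions, so treat it as a sketch rather than a drop-in config). Both clients list the subvolumes in the same order, so server1 is always the lock server, while each client can point read-subvolume at its own local copy:

```
# Hypothetical client volfile. Every client must list server1 then
# server2 in the SAME order, so server1 is always the lock server.
volume server1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1     # hypothetical address
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2     # hypothetical address
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes server1 server2         # same order on every client
  # On the client running on 192.168.0.2, prefer the local copy for
  # reads without changing the lock-server order:
  option read-subvolume server2
end-volume
```

Only the read-subvolume line differs between the two clients; the subvolumes line must not.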
> > >  So, what happens if the "lock" server is the one that goes down?
> > >  Will that render the whole AFR cluster inoperable, at least for
> > >  writes?
> >
> > If the first server is down, the second one is tried and so on. The
> > cluster remains operable.
>
>  Does the lock state remain for the locks when the primary/lock server
>  dies? If so, how?

If the server on which the lock was held goes down, the lock is lost.
Eventually the client will try to unlock on that server (it remembers
which server granted the lock) and the unlock fails.

If the client dies, the server removes all the locks held by that client.
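The behaviour described above can be sketched as follows. This is a toy model, not GlusterFS code: all names (Subvol, Afr, lock, unlock) are made up for illustration. It shows the first reachable subvolume in the configured order acting as lock server, and a lock dying with its server:

```python
# Hedged sketch (not GlusterFS code) of the lock-server behaviour
# described in the thread.

class Subvol:
    def __init__(self, name):
        self.name = name
        self.up = True
        self.locks = set()      # lock names currently held here

class Afr:
    def __init__(self, subvols):
        self.subvols = subvols  # must be in the same order on every client

    def lock(self, name):
        # Try subvolumes in configured order; the first reachable one
        # becomes the lock server for this lock.
        for sv in self.subvols:
            if sv.up:
                if name in sv.locks:
                    return None          # someone else holds the lock
                sv.locks.add(name)
                return sv
        raise RuntimeError("no subvolume available")

    def unlock(self, name, server):
        # The client remembers which server granted the lock and
        # unlocks there; this fails if that server went down.
        if not server.up:
            return False                 # lock state was lost
        server.locks.discard(name)
        return True

s1, s2 = Subvol("server1"), Subvol("server2")
afr = Afr([s1, s2])

holder = afr.lock("file-create")         # first subvol is lock server
s1.up = False                            # lock server dies: lock is lost
lost = afr.unlock("file-create", holder)
replacement = afr.lock("file-create")    # cluster stays operable
```

The model also shows why identical ordering matters: if one client listed server2 first, both clients could acquire "the" lock on different servers at the same time.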

>
>  Gordan
>
>  _______________________________________________
>  Gluster-devel mailing list
>  address@hidden
>  http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



