
Re: [Gluster-devel] Re: Unexpected behaviour when self-healing


From: Krishna Srinivas
Subject: Re: [Gluster-devel] Re: Unexpected behaviour when self-healing
Date: Thu, 29 May 2008 13:23:51 +0530

Daniel,

As you guessed, unify+AFR already provides the functionality you are
describing with the "balance" translator.
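
For example, here is a minimal client-side spec sketch of that unify+AFR
combination (using the www1..www5 and www-ns names from your example below;
exact option names may vary with your GlusterFS version). Each AFR group
keeps three replicas, and unify schedules new files across the groups:

volume afr-group1
  type cluster/afr
  subvolumes www1 www2 www3
end-volume

volume afr-group2
  # www3 is reused here only to fill a second group of three;
  # a real layout would spread the overlap more evenly
  type cluster/afr
  subvolumes www3 www4 www5
end-volume

volume unified
  type cluster/unify
  option namespace www-ns
  option scheduler rr
  subvolumes afr-group1 afr-group2
end-volume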

Let's fix the self-heal problem you faced when you started this thread.
Is it still valid?

Krishna

On Thu, May 29, 2008 at 2:13 AM, Daniel Wirtz <address@hidden> wrote:
> I finally got it working with client-side AFR only. I assumed that
> self-healing in Unify was the only possibility, but the self-healing in AFR
> already does everything I currently want. Great!
>
> However, I am thinking about some sort of "balance" translator that is able
> to balance files, e.g. with a replication count of 3, across all underlying
> datastores. Let's assume all clients are configured like this with the
> imaginary balance translator:
>
> volume balance
>  type cluster/balance
>  subvolumes www1 www2 www3 www4 www5
>  option switch *:3
>  option scheduler rr
>  option namespace www-ns
> end-volume
>
> A single file of any type would be balanced onto three of the five servers,
> chosen at random, for redundancy and failure protection. Is this already
> possible? By mixing AFR and Unify there seem to be ways to choose explicitly
> where to store which file types, but is such a truly redundant setup also
> possible? Google uses a similar approach afaik, and it is the concept behind
> MogileFS. The same mechanism could also cover plain AFR (switch *:5) or
> striping (switch *:1) somewhat more easily than it currently works, I think.
> Adding and removing servers would be very easy, too, just by checking all
> files for consistency (say by running ls -lR).
>
> regards
> Daniel



