
Re: [Gluster-devel] ZkFarmer


From: Jeff Darcy
Subject: Re: [Gluster-devel] ZkFarmer
Date: Tue, 08 May 2012 08:42:19 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120430 Thunderbird/12.0.1

On 05/08/2012 12:27 AM, Ian Latter wrote:
> The equivalent configuration in a glusterd world (from
> my experiments) pushed all of the distribute knowledge
> out to the client, and I haven't had a response on how
> to add replicate on top of distributed volumes in this
> model, so I've lost replicate.

This doesn't seem to be a problem with replicate-first vs. distribute-first,
but with client-side vs. server-side deployment of those translators.  You
*can* construct your own volfiles that do these things on the servers.  It will
work, but you won't get a lot of support for it.  The issue here is that we
have only a finite number of developers, and a near-infinite number of
configurations.  We can't properly qualify everything.  One way we've tried to
limit that space is by preferring distribute over replicate, because replicate
does a better job of shielding distribute from brick failures than vice versa.
Another is to deploy both on the clients, following the scalability rule of
pushing effort to the most numerous components.  The code can support other
arrangements, but the people might not.
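
For what it's worth, the client-side arrangement looks roughly like the
hand-written sketch below: one protocol/client per brick, replicate over
pairs of those, distribute over the replica sets. Hostnames, brick paths,
and volume names here are made up for illustration.

  # Sketch of a client-side volfile: two replica pairs, distribute on top.
  # Server names and brick paths are hypothetical.

  volume brick-a
    type protocol/client
    option transport-type tcp
    option remote-host server1
    option remote-subvolume /export/brick
  end-volume

  volume brick-b
    type protocol/client
    option transport-type tcp
    option remote-host server2
    option remote-subvolume /export/brick
  end-volume

  volume replica-0
    type cluster/replicate
    subvolumes brick-a brick-b
  end-volume

  volume brick-c
    type protocol/client
    option transport-type tcp
    option remote-host server3
    option remote-subvolume /export/brick
  end-volume

  volume brick-d
    type protocol/client
    option transport-type tcp
    option remote-host server4
    option remote-subvolume /export/brick
  end-volume

  volume replica-1
    type cluster/replicate
    subvolumes brick-c brick-d
  end-volume

  volume dht-0
    type cluster/distribute
    subvolumes replica-0 replica-1
  end-volume

If you want those cluster translators on the servers instead, the same
stanzas can be moved into the server-side volfiles; as noted above, it will
run, but it's outside the configurations we routinely test.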

BTW, a similar concern exists with respect to replication (i.e. AFR) across
data centers.  Performance is going to be bad, and there's not going to be much
we can do about it.

> But in this world, the client must
> know about everything and the server is simply a set
> of served/presented disks (as volumes).  In this
> glusterd world, then, why does any server need to
> know of any other server, if the clients are doing all of
> the heavy lifting?

First, because config changes have to apply across servers.  Second, because
server machines often spin up client processes for things like repair or
rebalance.
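
For example (the volume and brick names below are hypothetical), both of the
following are cluster-wide operations: the option change has to land in every
server's volfiles, and the add-brick/rebalance makes the servers start
client-like processes that mount the volume and migrate data:

  # A volume option change must propagate to all servers' configs.
  gluster volume set myvol performance.cache-size 256MB

  # Expanding and rebalancing spawns rebalance processes (a client-side
  # stack) on the servers themselves.
  gluster volume add-brick myvol server5:/export/brick server6:/export/brick
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status

  # Likewise, the self-heal daemon used for repair is a client-side stack
  # running on each server.
  gluster volume heal myvol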


