
Re: [Gluster-devel] AFR problem with 2.0rc4


From: Gordan Bobic
Subject: Re: [Gluster-devel] AFR problem with 2.0rc4
Date: Thu, 19 Mar 2009 11:48:18 +0000
User-agent: RoundCube Webmail/0.2

That's unavoidable to some extent, since the first server is the one that
is authoritative for locking. That means every read has to hit the first
server, even if the data is then retrieved from another server in the
cluster. Whether that explains all of the disparity you are seeing, I
don't know.
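
If the read load on the first subvolume is the issue, it may be worth
trying the read-subvolume option of cluster/afr, which pins reads to a
named subvolume (lock traffic will still go to the first one). A minimal
client-volfile sketch, assuming two subvolumes named brick1 and brick2
(the names are illustrative, not from your config):

    volume afr0
      type cluster/afr
      # serve file reads from brick2 so brick1 mostly handles lock traffic
      option read-subvolume brick2
      subvolumes brick1 brick2
    end-volume

Note this only changes where reads are served from; writes still go to
every subvolume, so it trades even spreading for predictability.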

Gordan

On Thu, 19 Mar 2009 12:40:23 +0100, nicolas prochazka
<address@hidden> wrote:
> I understand that, but in this case I have another problem: it seems
> that load balancing between subvolumes does not work very well. The
> first server in the subvolumes list is used far more often than the
> others (for reads), so I get heavy network usage on that first server,
> but not on the second.
> 
> nicolas
> 
> On Thu, Mar 19, 2009 at 12:08 PM, Gordan Bobic <address@hidden> wrote:
>> On Thu, 19 Mar 2009 16:25:21 +0530, Vikas Gorur <address@hidden>
>> wrote:
>>> 2009/3/19 Gordan Bobic <address@hidden>:
>>>> On Thu, 19 Mar 2009 16:14:18 +0530, Vikas Gorur <address@hidden>
>>>> wrote:
>>>>> 2009/3/19 Gordan Bobic <address@hidden>:
>>>>>> How does this affect adding new servers into an existing cluster?
>>>>>
>>>>> Adding a new server will work --- as and when files are accessed, new
>>>>> extended attributes will be written.
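>>>>> For reference, replicate keeps its changelog in trusted.afr.*
>>>>> extended attributes on the backend files, so you can watch them
>>>>> appear as files are accessed. A quick sketch (the export path and
>>>>> brick name below are illustrative, not from your setup):
>>>>>
>>>>>     # run on the server, against a file in the export directory
>>>>>     getfattr -d -m trusted.afr -e hex /data/export/somefile
>>>>>     # e.g. trusted.afr.brick1=0x000000000000000000000000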
>>>>
>>>> And presumably, permanently removing servers should also work the
>>>> same way? I'm only asking because I had a whole array of weird
>>>> spurious problems before when I removed a server and added a new
>>>> server at the same time.
>>>
>>> Removing a server might not work so seamlessly, since the new client
>>> will expect smaller extended attributes whereas the older files will
>>> still carry bigger ones. IIRC, this was the source of the errors you
>>> faced ("Numerical result out of range"). Fixes for this are on the
>>> way.
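>>> Until then, if you must retire a subvolume for good, one possible
>>> (untested) cleanup is to strip the stale changelog attribute of the
>>> removed brick from the backend files. A sketch, assuming the removed
>>> subvolume was named brick2 and the export lives under /data/export
>>> (both names illustrative):
>>>
>>>     # drop the stale AFR changelog xattr for the removed subvolume
>>>     find /data/export -exec setfattr -x trusted.afr.brick2 {} \;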
>>
>> Ah, OK, that makes sense. Thanks for clearing it up.
>>
>> Now if just the lockup on udev creation (root on glusterfs) in rc4 and
>> the big memory leak I reported get sorted out, I'll have a working
>> system. ;)



