
Re: [Gluster-devel] Question about unify over afr


From: Krishna Srinivas
Subject: Re: [Gluster-devel] Question about unify over afr
Date: Sat, 30 Aug 2008 22:13:02 +0530

Łukasz,

I am not able to completely understand the problem. What you are saying
is that if afr has 2 subvolumes things work fine, but if it has more than 2
subvols then the NS does unnecessary reads/writes? Or does it do unnecessary
mkdirs? After doing the setup, what do I need to do to see the problem that
you are seeing?

Krishna

On Fri, Aug 29, 2008 at 11:58 PM, Łukasz Mierzwa <address@hidden> wrote:
> Friday 29 August 2008 20:21:57 Łukasz Mierzwa wrote:
>> Friday 29 August 2008 20:13:45 Łukasz Mierzwa wrote:
>> > Thursday 28 August 2008 15:29:06 Łukasz Mierzwa wrote:
>> > > Thursday 28 August 2008 12:39:03 you wrote:
>> > > > On Thu, Aug 28, 2008 at 3:01 PM, Łukasz Mierzwa <address@hidden>
>> >
>> > wrote:
>> > > > >> Thursday 28 August 2008 07:06:30 Krishna Srinivas wrote:
>> > > > >> On Wed, Aug 27, 2008 at 10:55 PM, Łukasz Mierzwa
>> > > > >> <address@hidden>
>> > > > >
>> > > > > wrote:
>> > > > >> >> Tuesday 26 August 2008 16:28:41 Łukasz Mierzwa wrote:
>> > > > >> >> Hi,
>> > > > >> >>
>> > > > >> >> I'm testing glusterfs for small-file storage. First I set up a
>> > > > >> >> single-disk gluster server, connected to it from another
>> > > > >> >> machine and served those files with nginx. That worked ok, I
>> > > > >> >> got good performance, on average about +20ms slower for each
>> > > > >> >> request, but that's ok. Now I've set up unify over afr (2 afr
>> > > > >> >> groups with 3 servers each, unify and afr on the client side,
>> > > > >> >> a namespace dir on every server, afr'ed on the client side like
>> > > > >> >> the rest), and this is mounted on one of those 6 servers. After
>> > > > >> >> writing ~200GB of files from the production server I started to
>> > > > >> >> do some tests and noticed that doing a simple ls on that mount
>> > > > >> >> point causes as many writes as reads. This has something to do
>> > > > >> >> with either unify or afr; I suspect that those writes are due to
>> > > > >> >> the namespace, but I need to do more debugging. It's very
>> > > > >> >> annoying that simple reads are causing so many writes. All my
>> > > > >> >> servers are in sync so there should not be any need for
>> > > > >> >> self-healing. Before I start debugging it I wanted to ask if
>> > > > >> >> this is normal: should afr or unify generate so many writes to
>> > > > >> >> the namespace or maybe to xattrs during reads (storage is on
>> > > > >> >> ext3 with user_xattr on)?
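
For context, a client-side volfile with the shape described above looks
roughly like the sketch below. The hostnames, volume names and the
glusterfs 1.3-era option spellings are illustrative assumptions, not taken
from the configs attached later in this thread:

  # one protocol/client volume per exported brick; srv2..srv6 follow the same pattern
  volume srv1-data
    type protocol/client
    option transport-type tcp/client
    option remote-host server1.example     # hypothetical hostname
    option remote-subvolume data
  end-volume

  volume srv1-ns
    type protocol/client
    option transport-type tcp/client
    option remote-host server1.example
    option remote-subvolume ns
  end-volume

  # two replica groups of three data bricks each
  volume afr-group1
    type cluster/afr
    subvolumes srv1-data srv2-data srv3-data
  end-volume

  volume afr-group2
    type cluster/afr
    subvolumes srv4-data srv5-data srv6-data
  end-volume

  # namespace replicated across the ns brick of every server
  volume afr-ns
    type cluster/afr
    subvolumes srv1-ns srv2-ns srv3-ns srv4-ns srv5-ns srv6-ns
  end-volume

  volume unify0
    type cluster/unify
    option namespace afr-ns
    option scheduler rr                     # any scheduler works for this sketch
    subvolumes afr-group1 afr-group2
  end-volume

Since unify keeps the directory tree (and a zero-byte stub per file) on the
namespace volume, every ls touches the NS bricks, so reads there are
expected; the unexpected writes are what the rest of the thread is about.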
>> > > > >> >
>> > > > >> > I tested it a little bit today and found out that if I have 1
>> > > > >> > or 2 nodes in my afr group for the namespace there are no writes
>> > > > >> > at all while doing ls, but if I add one or more nodes they start
>> > > > >> > to get writes. WTF?
>> > > > >>
>> > > > >> Do you mean that your NS is getting write() calls when you do
>> > > > >> "ls"?
>> > > > >
>> > > > > It seems so. I will split my NS and DATA bricks onto different disks
>> > > > > today so I will be 100% sure. What I am sure of now is that I am
>> > > > > getting as many writes as reads when I do "ls" and have more than 2
>> > > > > NS bricks in AFR.
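
A server-side volfile that puts the NS and DATA bricks on different disks,
so their I/O can be told apart, could look roughly like this (the directory
paths and the 1.3-era auth option spelling are assumptions):

  volume data
    type storage/posix
    option directory /mnt/disk1/export      # data brick on its own disk
  end-volume

  volume ns
    type storage/posix
    option directory /mnt/disk2/ns          # namespace brick on a separate disk
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.data.allow *             # open access, for testing only
    option auth.ip.ns.allow *
    subvolumes data ns
  end-volume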
>> > > >
>> > > > Reads/writes should not happen when you do an 'ls'. Where are you
>> > > > seeing reads and writes being done? How are you seeing it? Are you
>> > > > strace'ing the glusterfsd?
>> > > >
>> > > > Krishna
>> > >
>> > > I first noticed them when I looked at the rrd graphs for those machines;
>> > > I wanted to see if AFR is balancing reads. I can see them in the rrd
>> > > graphs generated from collectd, and in dstat, iotop and iostat, so they
>> > > are happening. I first tried to find something in my config and forgot
>> > > about such an obvious step as strace'ing glusterfs. I'm attaching a log
>> > > from one of the servers: I straced the gluster server on this machine,
>> > > and you can see that there is a lot of mkdir/chown/chmod on files that
>> > > are already there. All bricks were online when I was writing files to
>> > > the gluster client, so no self-heal should be needed. I've also attached
>> > > the client and server configs.
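
For anyone who wants to reproduce the observation, something along these
lines should be enough; the server process name (glusterfsd) and the log
path are assumptions, adjust them for your install:

  # on one of the NS bricks: attach to the running server and log its syscalls
  # (assumes a single glusterfsd process on the box)
  strace -f -tt -o /tmp/glusterfsd.strace -p $(pidof glusterfsd)

  # run ls on the client mount, then count the unexpected metadata calls
  grep -cE 'mkdir|chown|chmod' /tmp/glusterfsd.strace

  # per-disk read/write rates while the ls is running
  iostat -x 5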
>> >
>> > I attached a strace log from the gluster server. This time I removed all
>> > but the last two ns servers and straced the brick with the ns; no
>> > mkdir/chmod this time.
>>
>> Missing attachment
>
> Hmm, something is eating my attachments.
> Grab it from:
> http://doc.grono.org/gl_2ns.log.bz2
>
> --
> Łukasz Mierzwa
>
> Grono.net S.A.
>  http://grono.net/
>
>
