Re: [Gluster-devel] problem with mtime


From: Krishna Srinivas
Subject: Re: [Gluster-devel] problem with mtime
Date: Wed, 4 Apr 2007 09:08:51 -0700 (PDT)
User-agent: SquirrelMail/1.4.4

Hi Brent,

What you said is true: GlusterFS's translator approach is very modular.
If some feature does not work, you can just remove the related
translator and continue working without much downtime.

Thanks for narrowing down the problem to the write-behind translator!
I was not using it in my setup, which is why I did not see
the problem.

The problem is that when we do a "cp -a", it calls utimes()
before close(). In the write-behind translator we flush buffered
writes when we close(). Since a write updates the file's mtime,
any utimes() call made before close() gets overwritten if data
in the buffer is flushed when we close().
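
To make the ordering concrete, here is a minimal sketch in C of the
sequence of calls involved (the path, data, and timestamps are made up
for illustration):

#include <fcntl.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/glusterfs/copy", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    write(fd, "data", 4);        /* held in the write-behind buffer */

    struct timeval times[2] = {
        { 1175690931, 0 },       /* atime of the source file */
        { 1175690931, 0 },       /* mtime of the source file */
    };
    utimes("/mnt/glusterfs/copy", times);  /* desired mtime set here */

    close(fd);                   /* flush happens now; the deferred
                                    write updates mtime again and
                                    clobbers what utimes() set */
    return 0;
}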

I am guessing that a similar thing is happening with rsync as well.

We are discussing possible solutions for this.....
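
One direction, just as a sketch (the struct and function names below
are invented for illustration, not the actual GlusterFS internals):
make write-behind drain its buffer before it forwards the timestamp
update, so that the flushed write can no longer land after utimes():

#include <stddef.h>
#include <sys/time.h>
#include <unistd.h>

struct wb_file {
    int    fd;          /* fd on the underlying subvolume        */
    char   buf[4096];   /* writes held back by write-behind      */
    size_t buffered;    /* bytes currently sitting in the buffer */
};

/* push pending data down to the real file (this updates mtime) */
static void wb_flush(struct wb_file *f)
{
    if (f->buffered > 0) {
        write(f->fd, f->buf, f->buffered);
        f->buffered = 0;
    }
}

/* hypothetical hook: drain the buffer first, then set the times,
   so the deferred write can no longer overwrite the new mtime */
static int wb_utimes(struct wb_file *f, const char *path,
                     struct timeval times[2])
{
    wb_flush(f);
    return utimes(path, times);
}

With that ordering, close() finds an empty buffer, so the mtime set
by utimes() survives.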

Thanks!
Krishna

PS: Different cp versions behave differently; Avati's cp
called close() first and then utimes().

> PS I think this shows the strength of the GlusterFS approach.  Because it
> is so modular, I was able to (hopefully) narrow down the culprit, AND
> I can just shut off the writebehind feature until the bug is resolved!
> Cooool...
>
> On Tue, 3 Apr 2007, Brent A Nelson wrote:
>
>> Sorry for the delay; I had to spend out my budget for the end of our
>> fiscal year, so I was preoccupied.  On the bright side, most of my
>> budget went to four new server nodes, which will all be running
>> GlusterFS. ;-)
>>
>> I was really wondering if it wasn't premature to blame AFR for the
>> mtime issue, so I thought I'd try some other tests.  I recompiled the
>> latest tla checkout and installed it on my two test nodes.  I compiled
>> without CFLAGS=-O3 (which I would normally set), just in case.  I don't
>> know if the removal of optimization caused it or if it was a recent
>> patch, but this time I couldn't get ANY file mtimes to copy correctly
>> (directory mtimes, as usual, are fine)!
>>
>> I decided to simplify my client config, eliminating all performance
>> translators and cluster/unify (since this setup is only one mirror
>> pair).  It worked!  No mtime issues at all.
>>
>> I then started adding back my full configuration, one translator at a
>> time, and it broke when I added writebehind.  Further, if I strip off
>> all nonessential translators except writebehind on my AFR volume, it
>> still fails the mtime test.
>>
>> So it looks like writebehind (or maybe writebehind on top of AFR) is the
>> culprit!
>>
>> Let me know what you'd like me to try next.  Perhaps writebehind on a
>> non-AFR volume would be a good test? Adjust writebehind settings? Try
>> your trace translator? Confirm that optimization makes it work
>> sometimes? Try your brand new miracle patch that fixes the issue
>> while, at the same time, implementing supercompression that doubles
>> your network bandwidth and triples your storage? ;-)
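>>
>> (To be concrete, by writebehind on a non-AFR volume I mean something
>> like the following, with client1 standing in for one of my plain
>> protocol/client volumes:)
>>
>> volume writebehind-test
>>  type performance/write-behind
>>  option aggregate-size 131072 # in bytes
>>  subvolumes client1
>> end-volume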
>>
>> Thanks,
>>
>> Brent
>>
>> PS My writebehind config is the same as I mentioned previously:
>>
>> volume writebehind
>>  type performance/write-behind
>>  option aggregate-size 131072 # in bytes
>>  subvolumes mirror0
>> end-volume
>>
>>
>> On Wed, 28 Mar 2007, Krishna Srinivas wrote:
>>
>>> Hi Brent,
>>>
>>> Can you take a fresh tla checkout without my debug diff, and put the
>>> trace xlator between the AFR and protocol/client xlators, i.e., as
>>> shown below?  Can you send me the log file when you do "cp -a"?  Try
>>> with as few files as possible, because the log file will be huge.
>>> Also let me know the files for which the time stamping failed.
>>>
>>> volume client1-trace
>>>  type debug/trace
>>>  subvolumes client1
>>> end-volume
>>>
>>>
>>> volume client2-trace
>>>  type debug/trace
>>>  subvolumes client2
>>> end-volume
>>>
>>>
>>> volume mirror
>>>  type cluster/afr
>>>  subvolumes client1-trace client2-trace
>>> end-volume
>>>
>>>
>>>> Hi Brent,
>>>>
>>>> First of all thanks for your help.
>>>>
>>>> I tried to reproduce the problem on my setup, but without
>>>> any success. I will send you another diff with more
>>>> debug statements, and let us see if we get any clues.
>>>>
>>>> Thanks
>>>> Krishna
>>>>