
RE: [Gluster-devel] Rsync failure problem


From: skimber
Subject: RE: [Gluster-devel] Rsync failure problem
Date: Wed, 16 Jul 2008 08:24:20 -0700 (PDT)

Hi Markus,

The problem is that it's doing an rsync, so it should be able to overwrite
the file, but that doesn't seem to be happening.
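
As I understand it, rsync writes the incoming data to a temporary file and
then renames it over the destination, so an existing file shouldn't block
the update. A rough sketch of that pattern (illustrative only; update_file
is a stand-in, not rsync's actual code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Write the new data to a temporary file, then rename() it over the
     * target.  rename(2) replaces an existing destination, which is why
     * the overwrite should normally succeed. */
    int update_file(const char *dest, const char *data, size_t len)
    {
        char tmp[4096];
        snprintf(tmp, sizeof(tmp), "%s.XXXXXX", dest);

        int fd = mkstemp(tmp);          /* create a unique temp file */
        if (fd < 0)
            return -1;

        if (write(fd, data, len) != (ssize_t)len || close(fd) != 0) {
            unlink(tmp);
            return -1;
        }

        return rename(tmp, dest);       /* replaces dest if it exists */
    }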

What is the O_EXCL flag? Could it have something to do with my speed
issues?

I thought GlusterFS was going to be the solution for me, but if I can't sort
out these speed issues I'm going to have to rethink, as I'm starting to feel
a bit out of my depth! :o(

Simon



Markus Gerstner wrote:
> 
> Hi,
> 
> from what I can see in the source, this error only indicates that the
> file already existed and could therefore not be created. It's probably
> not an issue, since it only affects the namespace, so in your case (and
> mine) this error message can be ignored.
> It might be worth looking into whether the O_EXCL flag is really needed
> there.
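> 
> To illustrate, here is a minimal sketch of the O_EXCL behaviour (just my
> reading of the open(2) semantics; I haven't traced the GlusterFS code
> path beyond posix_setdents):
> 
>     #include <errno.h>
>     #include <fcntl.h>
>     #include <stdio.h>
> 
>     int main(void)
>     {
>         /* O_CREAT | O_EXCL: create the file, but fail with EEXIST
>          * instead of opening it if it already exists. */
>         int fd = open("/tmp/some-file", O_CREAT | O_EXCL, 0644);
>         if (fd < 0 && errno == EEXIST)
>             printf("file already exists, the case being logged\n");
>         return 0;
>     }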
> 
> Regards,
> Markus
> 
> 
>> -----Original Message-----
>> From: address@hidden [mailto:address@hidden] On Behalf Of skimber
>> Sent: Tuesday, July 15, 2008 4:13 PM
>> To: address@hidden
>> Subject: Re: [Gluster-devel] Rsync failure problem
>> 
>> 
>> Further to my message below, I'm getting a lot (thousands?) of errors
>> like
>> this in the glusterfsd server log:
>> 
>> 2008-07-15 14:19:41 E [posix.c:1984:posix_setdents] brick-ns: Error
>> creating file /data/export-ns/mydata/myfile.txt with mode (0100644)
>> 
>> I don't think there's anything relevant in syslog or the client logs.
>> 
>> 
>> 
>> 
>> skimber wrote:
>> >
>> > Thanks for the responses.
>> >
>> > It turned out that the issue was with the disk in one of the clients.
>> > Using the other client machine it appears to be working fine,
>> > although it does seem very slow.
>> >
>> > An ls -l on a directory containing about 150 files took over 5
>> > minutes, and the rsync will only go at a rate of roughly one file
>> > every 3 seconds, with an average file size of 30 to 50 KB.
>> >
>> > If I do the rsync to the client's local HD instead, it manages many
>> > files per second.
>> >
>> > I have tried adding the following to the end of the client config
>> > from my original post, but it doesn't appear to have made any
>> > noticeable difference:
>> >
>> > volume readahead
>> >   type performance/read-ahead
>> >   option page-size 128kB        # 256KB is the default option
>> >   option page-count 4           # 2 is default option
>> >   option force-atime-update off # default is off
>> >   subvolumes unify
>> > end-volume
>> >
>> > volume writebehind
>> >   type performance/write-behind
>> >   option aggregate-size 1MB # default is 0bytes
>> >   option flush-behind on    # default is 'off'
>> >   subvolumes readahead
>> > end-volume
>> >
>> > volume io-cache
>> >   type performance/io-cache
>> >   option cache-size 64MB             # default is 32MB
>> >   option page-size 1MB               # 128KB is default option
>> >   option priority *:0                # default is '*:0'
>> >   option force-revalidate-timeout 2  # default is 1
>> >   subvolumes writebehind
>> > end-volume
>> >
>> >
>> > Can anyone tell me if I have done this correctly and/or suggest
>> > anything else I can do to fix this performance issue?
>> >
>> > Thanks
>> >
>> > Simon
>> >
>> 
