gluster-devel

Re: [Gluster-devel] Bug in AFR mode


From: nicolas prochazka
Subject: Re: [Gluster-devel] Bug in AFR mode
Date: Mon, 2 Mar 2009 20:17:15 +0100

Hi again,
I just tried with the latest git (tag 2.0.0pre26) and the same problem occurs.

Regards,
Nicolas Prochazka

On Fri, Feb 27, 2009 at 5:05 PM, nicolas prochazka
<address@hidden> wrote:
> The tests were done with Wednesday's version,
> so I am retrying the test with the latest version.
>
>
> On Fri, Feb 27, 2009 at 12:55 PM, Anand Avati <address@hidden> wrote:
>>
>> Nicolas,
>>  what is the commit id on which you tested? some bug fixes went into
>> replicate last night.
>>
>> Avati
>>
>> On Fri, Feb 27, 2009 at 5:20 PM, nicolas prochazka
>> <address@hidden> wrote:
>> > Hello
>> > I'm using the latest gluster from git.
>> > I think there is a problem with the lock server in AFR mode:
>> >
>> > Test :
>> > Server A and B in AFR
>> >
>> > TEST 1
>> > 1/ install A and B, then copy a file to A: sync to B is perfect
>> > 2/ erase server B entirely and reinstall it: synchronisation does not
>> > happen (nothing is done)
>> >
>> > TEST 2
>> > 1/ install A and B, then copy a file to A (gluster mount point): sync
>> > to B is perfect
>> > 2/ erase A entirely and reinstall it: sync from B is perfect
>> >
>> > Now if I redo TEST 1, but in my last volume (volume last) I swap
>> > brick_10.98.98.1 and brick_10.98.98.2 in the subvolumes line, so that
>> > 10.98.98.1 is now the lock server for AFR:
>> > TEST 1 works, TEST 2 does not.
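>> > For clarity, the swapped volume definition looked like this (a sketch;
>> > as far as I understand, replicate uses the first listed subvolume as
>> > its lock server):
>> >
>> > volume last
>> >   type cluster/replicate
>> >   # first subvolume listed = lock server (my assumption about AFR)
>> >   subvolumes brick_10.98.98.1 brick_10.98.98.2
>> >   option read-subvolume brick_10.98.98.2
>> >   option favorite-child brick_10.98.98.2
>> > end-volume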
>> >
>> > I think that in one case it tries to use a lock server on which the
>> > file does not exist, so the problem occurs.
>> > I tried to add a second lock server with
>> > option data-lock-server-count 2
>> > option entry-lock-server-count 2
>> > without success.
>> > I also tried with 0, without success.
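>> > To be precise about where I set those counts: I put them inside the
>> > cluster/replicate volume, like this (a sketch of my attempt; I am
>> > assuming these are options of the replicate translator):
>> >
>> > volume last
>> >   type cluster/replicate
>> >   subvolumes brick_10.98.98.2 brick_10.98.98.1
>> >   # attempted: use both subvolumes as lock servers
>> >   option data-lock-server-count 2
>> >   option entry-lock-server-count 2
>> > end-volume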
>> >
>> >
>> > Client config file (the same for A and B):
>> >
>> > volume brick_10.98.98.1
>> > type protocol/client
>> > option transport-type tcp/client
>> > option transport-timeout 120
>> > option remote-host 10.98.98.1
>> > option remote-subvolume brick
>> > end-volume
>> >
>> >
>> > volume brick_10.98.98.2
>> > type protocol/client
>> > option transport-type tcp/client
>> > option transport-timeout 120
>> > option remote-host 10.98.98.2
>> > option remote-subvolume brick
>> > end-volume
>> >
>> >
>> > volume last
>> > type cluster/replicate
>> > subvolumes brick_10.98.98.2 brick_10.98.98.1
>> > option read-subvolume brick_10.98.98.2
>> > option favorite-child brick_10.98.98.2
>> > end-volume
>> >
>> > volume iothreads
>> > type performance/io-threads
>> > option thread-count 4
>> > subvolumes last
>> > end-volume
>> >
>> > volume io-cache
>> > type performance/io-cache
>> > option cache-size 2048MB             # default is 32MB
>> > option page-size  1MB             #128KB is default option
>> > option cache-timeout 2  # default is 1
>> > subvolumes iothreads
>> > end-volume
>> >
>> > volume writebehind
>> > type performance/write-behind
>> > option block-size 256KB # default is 0bytes
>> > option cache-size 512KB
>> > option flush-behind on      # default is 'off'
>> > subvolumes io-cache
>> > end-volume
>> >
>> >
>> >
>> > Server config is the same for A and B, except for the IP:
>> >
>> >
>> > volume brickless
>> > type storage/posix
>> > option directory /mnt/disks/export
>> > end-volume
>> >
>> > volume brickthread
>> > type features/posix-locks
>> > option mandatory on          # enables mandatory locking on all files
>> > subvolumes brickless
>> > end-volume
>> >
>> > volume brickcache
>> > type performance/io-cache
>> > option cache-size 1024MB
>> > option page-size 1MB
>> > option cache-timeout 2
>> > subvolumes brickthread
>> > end-volume
>> >
>> > volume brick
>> > type performance/io-threads
>> > option thread-count 8
>> > option cache-size 256MB
>> > subvolumes brickcache
>> > end-volume
>> >
>> >
>> > volume server
>> > type protocol/server
>> > subvolumes brick
>> > option transport-type tcp
>> > option auth.addr.brick.allow 10.98.98.*
>> > end-volume
>> >
>> > _______________________________________________
>> > Gluster-devel mailing list
>> > address@hidden
>> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
>> >
>> >
>



