
Re: [Gluster-devel] about afr


From: nicolas prochazka
Subject: Re: [Gluster-devel] about afr
Date: Thu, 29 Jan 2009 14:43:05 +0100

hello again,
to be more precise:
now I can do 'ls /glustermountpoint' after the timeout in all cases, which is good,
but for files that were opened before the crash of the first server it does not work; the process seems to be blocked.

Regards,
Nicolas.


2009/1/28 nicolas prochazka <address@hidden>
The last patches (882, 883) seem to resolve my problem.
Regards,
Nicolas

2009/1/23 nicolas prochazka <address@hidden>

For my test,
I shut down the network interface (ifconfig eth0 down) to reproduce this problem.
It seems that this problem appears with certain applications (locking?); for example, I can reproduce it with qemu (www.qemu.org).
I also noticed that qemu does not work with booster; I do not know if this is a similar problem (perhaps related to how qemu or other programs open files?).
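
(Repro sketch, assuming /mnt/vdisk is the replicate mount; the image path comes from the qemu command below, the rest is illustrative:)

# client: keep a big file open through the gluster mount
/usr/local/bin/qemu -drive file=/mnt/vdisk/images/vm_calcul ... &

# first afr server: simulate the crash by cutting the network
ifconfig eth0 down

# client: after the transport timeout 'ls' answers again, but the
# process holding the open file stays blocked
ls /mnt/vdisk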

booster debug:
   LD_PRELOAD=/usr/local/lib64/glusterfs/glusterfs-booster.so  /usr/local/bin/qemu -name calcul -k fr -localtime -usb -usbdevice tablet -net vde,vlan=0,sock=/tmpsafe/neoswitch -vnc 10.98.98.1:1 -monitor tcp:127.0.0.1:10229,server,nowait,nodelay -vga std -m 512 -net nic,vlan=0,macaddr=ac:de:48:36:a2:aa,model=rtl8139 -drive file=/mnt/vdisk/images/vm_calcul -no-kvm
*** glibc detected *** /usr/local/bin/qemu: double free or corruption (out): 0x0000000000bd71e0 ***
======= Backtrace: =========
/lib/libc.so.6[0x7f40e28c1aad]
/lib/libc.so.6(cfree+0x76)[0x7f40e28c3796]
/usr/local/bin/qemu[0x49f13f]
/usr/local/bin/qemu[0x461f42]
/usr/local/bin/qemu[0x409400]
/usr/local/bin/qemu[0x40b940]
/lib/libc.so.6(__libc_start_main+0xf4)[0x7f40e2872b74]
/usr/local/bin/qemu[0x405629]
======= Memory map: ========
00400000-005bb000 r-xp 00000000 00:01 92                                 /usr/local/bin/qemu-system-x86_64
007ba000-007bb000 r--p 001ba000 00:01 92                                 /usr/local/bin/qemu-system-x86_64
007bb000-007c0000 rw-p 001bb000 00:01 92                                 /usr/local/bin/qemu-system-x86_64
007c0000-00bf0000 rw-p 007c0000 00:00 0                                  [heap]
7f40dc000000-7f40dc021000 rw-p 7f40dc000000 00:00 0
7f40dc021000-7f40e0000000 ---p 7f40dc021000 00:00 0
7f40e17d1000-7f40e17de000 r-xp 00000000 00:01 5713                       /lib64/libgcc_s.so.1
7f40e17de000-7f40e19dd000 ---p 0000d000 00:01 5713                       /lib64/libgcc_s.so.1
7f40e19dd000-7f40e19de000 r--p 0000c000 00:01 5713                       /lib64/libgcc_s.so.1
7f40e19de000-7f40e19df000 rw-p 0000d000 00:01 5713                       /lib64/libgcc_s.so.1
7f40e19df000-7f40e19e9000 r-xp 00000000 00:01 5772                       /lib64/libnss_files-2.6.1.so
7f40e19e9000-7f40e1be8000 ---p 0000a000 00:01 5772                       /lib64/libnss_files-2.6.1.so
7f40e1be8000-7f40e1be9000 r--p 00009000 00:01 5772                       /lib64/libnss_files-2.6.1.so
7f40e1be9000-7f40e1bea000 rw-p 0000a000 00:01 5772                       /lib64/libnss_files-2.6.1.so
7f40e1bea000-7f40e1bf3000 r-xp 00000000 00:01 5796                       /lib64/libnss_nis-2.6.1.so
7f40e1bf3000-7f40e1df3000 ---p 00009000 00:01 5796                       /lib64/libnss_nis-2.6.1.so
7f40e1df3000-7f40e1df4000 r--p 00009000 00:01 5796                       /lib64/libnss_nis-2.6.1.so
7f40e1df4000-7f40e1df5000 rw-p 0000a000 00:01 5796                       /lib64/libnss_nis-2.6.1.so
7f40e1df5000-7f40e1e09000 r-xp 00000000 00:01 5777                       /lib64/libnsl-2.6.1.so
7f40e1e09000-7f40e2008000 ---p 00014000 00:01 5777                       /lib64/libnsl-2.6.1.so
7f40e2008000-7f40e2009000 r--p 00013000 00:01 5777                       /lib64/libnsl-2.6.1.so
7f40e2009000-7f40e200a000 rw-p 00014000 00:01 5777                       /lib64/libnsl-2.6.1.so
7f40e200a000-7f40e200c000 rw-p 7f40e200a000 00:00 0
7f40e200c000-7f40e2013000 r-xp 00000000 00:01 5814                       /lib64/libnss_compat-2.6.1.so
7f40e2013000-7f40e2212000 ---p 00007000 00:01 5814                       /lib64/libnss_compat-2.6.1.so
7f40e2212000-7f40e2213000 r--p 00006000 00:01 5814                       /lib64/libnss_compat-2.6.1.so
7f40e2213000-7f40e2214000 rw-p 00007000 00:01 5814                       /lib64/libnss_compat-2.6.1.so
7f40e2214000-7f40e2216000 r-xp 00000000 00:01 5794                       /lib64/libdl-2.6.1.so
7f40e2216000-7f40e2416000 ---p 00002000 00:01 5794                       /lib64/libdl-2.6.1.so
7f40e2416000-7f40e2417000 r--p 00002000 00:01 5794                       /lib64/libdl-2.6.1.so
7f40e2417000-7f40e2418000 rw-p 00003000 00:01 5794                       /lib64/libdl-2.6.1.so
7f40e2418000-7f40e2446000 r-xp 00000000 00:01 531                        /usr/local/lib64/libglusterfs.so.0.0.0
7f40e2446000-7f40e2645000 ---p 0002e000 00:01 531                        /usr/local/lib64/libglusterfs.so.0.0.0
7f40e2645000-7f40e2646000 r--p 0002d000 00:01 531                        /usr/local/lib64/libglusterfs.so.0.0.0
7f40e2646000-7f40e2647000 rw-p 0002e000 00:01 531                        /usr/local/lib64/libglusterfs.so.0.0.0
7f40e2647000-7f40e2649000 rw-p 7f40e2647000 00:00 0
7f40e2649000-7f40e2654000 r-xp 00000000 00:01 528                        /usr/local/lib64/libglusterfsclient.so.0.0.0
7f40e2654000-7f40e2853000 ---p 0000b000 00:01 528                        /usr/local/lib64/libglusterfsclient.so.0.0.0
7f40e2853000-7f40e2854000 r--p 0000a000 00:01 528                        /usr/local/lib64/libglusterfsclient.so.0.0.0
7f40e2854000-7f40e2855000 rw-p 0000b000 00:01 528                        /usr/local/lib64/libglusterfsclient.so.0.0.0
7f40e2855000-7f40e298b000 r-xp 00000000 00:01 5765                       /lib64/libc-2.6.1.so
7f40e298b000-7f40e2b8a000 ---p 00136000 00:01 5765                       /lib64/libc-2.6.1.so
7f40e2b8a000-7f40e2b8e000 r--p 00135000 00:01 5765                       /lib64/libc-2.6.1.so
7f40e2b8e000-7f40e2b8f000 rw-p 00139000 00:01 5765                       /lib64/libc-2.6.1.so
7f40e2b8f000-7f40e2b94000 rw-p 7f40e2b8f000 00:00 0
7f40e2b94000-7f40e2b98000 r-xp 00000000 00:01 535                        /usr/local/lib64/libvdeplug.so.2.1.0
7f40e2b98000-7f40e2d97000 ---p 00004000 00:01 535                        /usr/local/lib64/libvdeplug.so.2.1.0
7f40e2d97000-7f40e2d98000 r--p 00003000 00:01 535                        /usr/local/lib64/libvdeplug.so.2.1.0
7f40e2d98000-7f40e2d99000 rw-p 00004000 00:01 535                        /usr/local/lib64/libvdeplug.so.2.1.0
7f40e2d99000-7f40e2de6000 r-xp 00000000 00:01 5816                       /lib64/libncurses.so.5.6
7f40e2de6000-7f40e2ee5000 ---p 0004d000 00:01 5816                       /lib64/libncurses.so.5.6
7f40e2ee5000-7f40e2ef4000 rw-p 0004c000 00:01 5816                       /lib64/libncurses.so.5.6
7f40e2ef4000-7f40e2ef6000 r-xp 00000000 00:01 5704                       /lib64/libutil-2.6.1.so
7f40e2ef6000-7f40e30f5000 ---p 00002000 00:01 5704                       /lib64/libutil-2.6.1.so
7f40e30f5000-7f40e30f6000 r--p 00001000 00:01 5704                       /lib64/libutil-2.6.1.so
7f40e30f6000-7f40e30f7000 rw-p 00002000 00:01 5704                       /lib64/libutil-2.6.1.so
7f40e30f7000-7f40e30ff000 r-xp 00000000 00:01 5513                       /lib64/librt-2.6.1.so
7f40e30ff000-7f40e32fe000 ---p 00008000 00:01 5513                       /lib64/librt-2.6.1.so
7f40e32fe000-7f40e32ff000 r--p 00007000 00:01 5513                       /lib64/librt-2.6.1.so
7f40e32ff000-7f40e3300000 rw-p 00008000 00:01 5513                       /lib64/librt-2.6.1.so
7f40e3300000-7f40e3315000 r-xp 00000000 00:01 5767                       /lib64/libpthread-2.6.1.so
7f40e3315000-7f40e3515000 ---p 00015000 00:01 5767                       /lib64/libpthread-2.6.1.so
7f40e3515000-7f40e3516000 r--p 00015000 00:01 5767                       /lib64/libpthread-2.6.1.so
7f40e3516000-7f40e3517000 rw-p 00016000 00:01 5767                       /lib64/libpthread-2.6.1.so
7f40e3517000-7f40e351b000 rw-p 7f40e3517000 00:00 0
7f40e351b000-7f40e359b000 r-xp 00000000 00:01 5780                       /lib64/libm-2.6.1.so
7f40e359b000-7f40e379a000 ---p 00080000 00:01 5780                       /lib64/libm-2.6.1.so
7f40e379a000-7f40e379b000 r--p 0007f000 00:01 5780                       /lib64/libm-2.6.1.so
7f40e379b000-7f40e379c000 rw-p 00080000 00:01 5780                       /lib64/libm-2.6.1.so
7f40e379c000-7f40e379f000 r-xp 00000000 00:01 515                        /usr/local/lib64/glusterfs/glusterfs-booster.so
7f40e379f000-7f40e399e000 ---p 00003000 00:01 515                        /usr/local/lib64/glusterfs/glusterfs-booster.so
7f40e399e000-7f40e399f000 r--p 00002000 00:01 515                        /usr/local/lib64/glusterfs/glusterfs-booster.so
7f40e399f000-7f40e39a0000 rw-p 00003000 00:01 515                        /usr/local/lib64/glusterfs/glusterfs-booster.so
7f40e39a0000-7f40e39bb000 r-xp 00000000 00:01 5788                       /lib64/ld-2.6.1.so
7f40e3a72000-7f40e3a9a000 rw-p 7f40e3a72000 00:00 0
7f40e3a9a000-7f40e3aae000 r-xp 00000000 00:01 5815                       /lib64/libz.so.1.2.3
7f40e3aae000-7f40e3bad000 ---p 00014000 00:01 5815                       /lib64/libz.so.1.2.3
7f40e3bad000-7f40e3bae000 rw-p 00013000 00:01 5815                       /lib64/libz.so.1.2.3
7f40e3bae000-7f40e3baf000 rw-p 7f40e3bae000 00:00 0
7f40e3bb5000-7f40e3bba000 rw-p 7f40e3bb5000 00:00 0
7f40e3bba000-7f40e3bbb000 r--p 0001a000 00:01 5788                       /lib64/ld-2.6.1.so
7f40e3bbb000-7f40e3bbc000 rw-p 0001b000 00:01 5788                       /lib64/ld-2.6.1.so
7f40e3c00000-7f4105200000 rw-p 00000000 00:0f 5035416                    /hugepages/kvm.XbPD2I (deleted)
7fffebba7000-7fffebbbc000 rw-p 7ffffffea000 00:00 0                      [stack]
7fffebbff000-7fffebc00000 r-xp 7fffebbff000 00:00 0                      [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]



2009/1/23 Krishna Srinivas <address@hidden>

Raghu,

Nicolas sees the problem when the server is hard powered off. Killing
the server process seems to work fine for him...

Krishna

On Fri, Jan 23, 2009 at 9:34 AM, Raghavendra G
<address@hidden> wrote:
> Avati,
>
> ls/cd works fine for the test described by Nicolas. In fact, when I killed
> both the glusterfs servers, I got ENOTCONN, but when I started one of the
> servers 'ls' worked fine.
>
> regards,
> On Fri, Jan 23, 2009 at 6:22 AM, Anand Avati <address@hidden> wrote:
>>
>> Nicolas,
>>  Are you running any specific apps on the mountpoint? Or is it just
>> regular ls/cd kind of commands?
>>
>> Raghu,
>>  Can you try to reproduce this in our lab?
>>
>> Thanks,
>> Avati
>>
>> On Wed, Jan 21, 2009 at 9:22 PM, nicolas prochazka
>> <address@hidden> wrote:
>> > Hello,
>> > I think I have localized the problem more precisely:
>> >
>> > volume last
>> >   type cluster/replicate
>> >   subvolumes brick_10.98.98.1 brick_10.98.98.2
>> > end-volume
>> >
>> > if I shut down 10.98.98.2, 10.98.98.1 is OK after the timeout;
>> > if I shut down 10.98.98.1, 10.98.98.2 is not OK after the timeout, and only
>> > becomes ready again if 10.98.98.1 comes back.
>> >
>> > Now if I change the order to: subvolumes brick_10.98.98.2 brick_10.98.98.1
>> > the situation is reversed.
>> >
>> > The afr doc says that by default, AFR considers the first subvolume as
>> > the sole lock server.
>> > Perhaps the bug comes from there: when the lock server is down, the other
>> > clients stop working?
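>> >
>> > (As a sketch, the swapped ordering I tested: the same volume as above with
>> > only the subvolume order changed; the comment reflects my reading of the
>> > afr doc, not verified behaviour:)
>> >
>> > volume last
>> >   type cluster/replicate
>> >   # by default the first-listed subvolume is the sole lock server
>> >   subvolumes brick_10.98.98.2 brick_10.98.98.1
>> > end-volume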
>> >
>> > Regards,
>> > Nicolas Prochazka
>> >
>> >
>> > 2009/1/19 nicolas prochazka <address@hidden>
>> >>
>> >> it is on a private network;
>> >> I am going to try to simulate this issue in a virtual qemu environment
>> >> and will contact you again.
>> >> Thanks a lot for your great work.
>> >> Nicolas
>> >>
>> >> 2009/1/19 Anand Avati <address@hidden>
>> >>>
>> >>> Nicolas,
>> >>>  It is hard for us to debug with such a brief description. Would it be
>> >>> possible for us to inspect the system via a remote login while this
>> >>> error is being reproduced?
>> >>>
>> >>> avati
>> >>>
>> >>> On Mon, Jan 19, 2009 at 8:32 PM, nicolas prochazka
>> >>> <address@hidden> wrote:
>> >>> > hi again,
>> >>> > with tla855, if I change the network card's IP the 'ls' test now runs
>> >>> > after the timeout, so there is big progress;
>> >>> > but if I stop the server with a hard powerdown (switch on/off, as in a
>> >>> > crash), the problem persists. I do not understand the difference
>> >>> > between a network cut and a powerdown.
>> >>> >
>> >>> > Regards,
>> >>> > Nicolas Prochazka
>> >>> >
>> >>> > 2009/1/19 nicolas prochazka <address@hidden>
>> >>> >>
>> >>> >> hi,
>> >>> >> Do you have more information about this bug?
>> >>> >> I do not understand how afr works.
>> >>> >> With my initial configuration, if I change the IP of the network card
>> >>> >> (from 10.98.98.2 => 10.98.98.4) on server B during the test, 'ls' on
>> >>> >> the client and the servers (A, C) works after some timeout, but some
>> >>> >> programs seem to block the whole system (if I run my own program or
>> >>> >> qemu, for example, 'ls' does not respond anymore), and if I change the
>> >>> >> IP back (from 10.98.98.4 => 10.98.98.2) then everything becomes OK
>> >>> >> again.
>> >>> >>
>> >>> >> Regards,
>> >>> >> Nicolas Prochazka
>> >>> >>
>> >>> >>
>> >>> >> 2009/1/14 Krishna Srinivas <address@hidden>
>> >>> >>>
>> >>> >>> Nicolas,
>> >>> >>>
>> >>> >>> It might be a bug. Let me try to reproduce the problem here and
>> >>> >>> get
>> >>> >>> back
>> >>> >>> to you.
>> >>> >>>
>> >>> >>> Krishna
>> >>> >>>
>> >>> >>> On Wed, Jan 14, 2009 at 6:59 PM, nicolas prochazka
>> >>> >>> <address@hidden> wrote:
>> >>> >>> > hello again,
>> >>> >>> > To finish with this issue, here is the information I can send you:
>> >>> >>> > If I stop glusterfsd (on server B) before stopping the server (hard
>> >>> >>> > poweroff by pressing on/off), the problem does not occur. If I hard
>> >>> >>> > poweroff without stopping gluster (a real crash), the problem
>> >>> >>> > occurs.
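>> >>> >>> >
>> >>> >>> > (A sketch of the two cases; 'killall' here is shorthand for however
>> >>> >>> > glusterfsd is stopped:)
>> >>> >>> >
>> >>> >>> > # case 1: stop glusterfsd first, then hard poweroff -> no problem
>> >>> >>> > killall glusterfsd
>> >>> >>> > # now press on/off
>> >>> >>> >
>> >>> >>> > # case 2: hard poweroff while glusterfsd still runs (a real crash)
>> >>> >>> > # -> problem occurs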
>> >>> >>> > Regards
>> >>> >>> > Nicolas Prochazka.
>> >>> >>> >
>> >>> >>> > 2009/1/14 nicolas prochazka <address@hidden>
>> >>> >>> >>
>> >>> >>> >> hi again,
>> >>> >>> >> I am continuing my tests:
>> >>> >>> >> In my case, if a file is open on the gluster mount while one afr
>> >>> >>> >> server is stopped, the gluster mount can no longer be accessed on
>> >>> >>> >> that client. Any other client (C, for example) that did not have a
>> >>> >>> >> file open during the stop is not affected; it can do an ls or an
>> >>> >>> >> open after the transport timeout.
>> >>> >>> >> If I kill the process that uses the file, then I can use the
>> >>> >>> >> gluster mount point without problem.
>> >>> >>> >>
>> >>> >>> >> Regards,
>> >>> >>> >> Nicolas Prochazka.
>> >>> >>> >>
>> >>> >>> >> 2009/1/12 nicolas prochazka <address@hidden>
>> >>> >>> >>>
>> >>> >>> >>> for your attention,
>> >>> >>> >>> it seems that this problem occurs only when files are open and in
>> >>> >>> >>> use on the gluster mount point.
>> >>> >>> >>> I use big computation files (~10G), mostly for reading; in this
>> >>> >>> >>> case the problem occurs.
>> >>> >>> >>> If I use only small files created from time to time, no problem
>> >>> >>> >>> occurs; the gluster mount can use the other afr server.
>> >>> >>> >>>
>> >>> >>> >>> Regards,
>> >>> >>> >>> Nicolas Prochazka
>> >>> >>> >>>
>> >>> >>> >>>
>> >>> >>> >>>
>> >>> >>> >>> 2009/1/12 nicolas prochazka <address@hidden>
>> >>> >>> >>>>
>> >>> >>> >>>> Hi,
>> >>> >>> >>>> I am trying to set
>> >>> >>> >>>> option transport-timeout 5
>> >>> >>> >>>> in protocol/client,
>> >>> >>> >>>> so a maximum of 10 seconds before gluster returns to the normal
>> >>> >>> >>>> situation?
>> >>> >>> >>>> No success; I am still in the same situation: an 'ls /mnt/gluster'
>> >>> >>> >>>> does not respond after more than 10 minutes, and I cannot reuse
>> >>> >>> >>>> the gluster mount except by killing the glusterfs process.
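>> >>> >>> >>>>
>> >>> >>> >>>> (Sketch of my client volume with only the timeout option added;
>> >>> >>> >>>> it is the same volume as in the config quoted below:)
>> >>> >>> >>>>
>> >>> >>> >>>> volume brick_10.98.98.1
>> >>> >>> >>>>   type protocol/client
>> >>> >>> >>>>   option transport-type tcp/client
>> >>> >>> >>>>   option remote-host 10.98.98.1
>> >>> >>> >>>>   option remote-subvolume brick
>> >>> >>> >>>>   option transport-timeout 5  # afr is said to wait up to 2x this
>> >>> >>> >>>> end-volume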
>> >>> >>> >>>>
>> >>> >>> >>>> Regards
>> >>> >>> >>>> Nicolas Prochazka
>> >>> >>> >>>>
>> >>> >>> >>>>
>> >>> >>> >>>>
>> >>> >>> >>>> 2009/1/12 Raghavendra G <address@hidden>
>> >>> >>> >>>>>
>> >>> >>> >>>>> Hi Nicolas,
>> >>> >>> >>>>>
>> >>> >>> >>>>> How much time did you wait before concluding that the mount point
>> >>> >>> >>>>> was not working? afr waits for a maximum of (2 * transport-timeout)
>> >>> >>> >>>>> seconds before sending a reply to the application. Can you wait
>> >>> >>> >>>>> for some time and check whether this is the issue you are facing?
>> >>> >>> >>>>>
>> >>> >>> >>>>> regards,
>> >>> >>> >>>>>
>> >>> >>> >>>>> On Mon, Jan 12, 2009 at 7:49 PM, nicolas prochazka
>> >>> >>> >>>>> <address@hidden> wrote:
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> Hi.
>> >>> >>> >>>>>> I have installed this setup to test Gluster:
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> + 2 servers (A, B)
>> >>> >>> >>>>>>    - with the glusterfsd server (glusterfs--mainline--3.0--patch-842)
>> >>> >>> >>>>>>    - with the glusterfs client
>> >>> >>> >>>>>>    (server conf file below)
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> + 1 server C in client mode only.
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> My issue:
>> >>> >>> >>>>>> If C opens a big file with this client configuration and I then
>> >>> >>> >>>>>> stop server A (or B), the gluster mount point on server C seems
>> >>> >>> >>>>>> to be blocked; I cannot do 'ls -l', for example.
>> >>> >>> >>>>>> Is this normal? Since C opened its file on A or B, does it block
>> >>> >>> >>>>>> when that server goes down?
>> >>> >>> >>>>>> I thought that with client-side AFR the client could reopen the
>> >>> >>> >>>>>> file on the other server; am I wrong?
>> >>> >>> >>>>>> Should I use the HA translator?
>> >>> >>> >>>>>>
>> >>> >>> >>>>>>
>> >>> >>> >>>>>>
>> >>> >>> >>>>>>
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume brickless
>> >>> >>> >>>>>>   type storage/posix
>> >>> >>> >>>>>>   option directory /mnt/disks/export
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume brick
>> >>> >>> >>>>>>   type features/posix-locks
>> >>> >>> >>>>>>   option mandatory on  # enables mandatory locking on all files
>> >>> >>> >>>>>>   subvolumes brickless
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume server
>> >>> >>> >>>>>>   type protocol/server
>> >>> >>> >>>>>>   subvolumes brick
>> >>> >>> >>>>>>   option transport-type tcp
>> >>> >>> >>>>>>   option auth.addr.brick.allow 10.98.98.*
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>> ---------------------------
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> client config:
>> >>> >>> >>>>>> volume brick_10.98.98.1
>> >>> >>> >>>>>>   type protocol/client
>> >>> >>> >>>>>>   option transport-type tcp/client
>> >>> >>> >>>>>>   option remote-host 10.98.98.1
>> >>> >>> >>>>>>   option remote-subvolume brick
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume brick_10.98.98.2
>> >>> >>> >>>>>>   type protocol/client
>> >>> >>> >>>>>>   option transport-type tcp/client
>> >>> >>> >>>>>>   option remote-host 10.98.98.2
>> >>> >>> >>>>>>   option remote-subvolume brick
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume last
>> >>> >>> >>>>>>   type cluster/replicate
>> >>> >>> >>>>>>   subvolumes brick_10.98.98.1 brick_10.98.98.2
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume iothreads
>> >>> >>> >>>>>>   type performance/io-threads
>> >>> >>> >>>>>>   option thread-count 2
>> >>> >>> >>>>>>   option cache-size 32MB
>> >>> >>> >>>>>>   subvolumes last
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume io-cache
>> >>> >>> >>>>>>   type performance/io-cache
>> >>> >>> >>>>>>   option cache-size 1024MB           # default is 32MB
>> >>> >>> >>>>>>   option page-size 1MB               # default is 128KB
>> >>> >>> >>>>>>   option force-revalidate-timeout 2  # default is 1
>> >>> >>> >>>>>>   subvolumes iothreads
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>> volume writebehind
>> >>> >>> >>>>>>   type performance/write-behind
>> >>> >>> >>>>>>   option aggregate-size 256KB  # default is 0 bytes
>> >>> >>> >>>>>>   option window-size 3MB
>> >>> >>> >>>>>>   option flush-behind on       # default is 'off'
>> >>> >>> >>>>>>   subvolumes io-cache
>> >>> >>> >>>>>> end-volume
>> >>> >>> >>>>>>
>> >>> >>> >>>>>>
>> >>> >>> >>>>>>
>> >>> >>> >>>>>
>> >>> >>> >>>>>
>> >>> >>> >>>>>
>> >>> >>> >>>>> --
>> >>> >>> >>>>> Raghavendra G
>> >>> >>> >>>>>
>> >>> >>> >>>>
>> >>> >>> >>>
>> >>> >>> >>
>> >>> >>> >
>> >>> >>> >
>> >>> >>> >
>> >>> >>> >
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>
>> >
>> >
>
>
>
> --
> Raghavendra G
>
>



