
Re: [Gluster-devel] about glusterfs--mainline--3.0--patch-717


From: Anand Avati
Subject: Re: [Gluster-devel] about glusterfs--mainline--3.0--patch-717
Date: Tue, 9 Dec 2008 23:36:13 +0530

Nicolas,
Do you have logs from the client and server?

avati
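For reference, debug-level logs for both sides can usually be captured by pointing each process at a log file when it starts. A minimal sketch, assuming the -f (spec file), -l (log file), and -L (log level) options of the glusterfs binaries of that era, with hypothetical paths:

  # server side: write debug logs to a known location
  glusterfsd -f /etc/glusterfs/server.vol -l /var/log/glusterfs/server.log -L DEBUG

  # client side: same flags, plus the mount point
  glusterfs -f /etc/glusterfs/client.vol -l /var/log/glusterfs/client.log -L DEBUG /mnt/vdisk

The resulting log files are what would normally accompany a report like this.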

2008/12/9 nicolas prochazka <address@hidden>:
> Hi again,
> About glusterfs--mainline--3.0--patch-727 with the same configuration:
> glusterfsd now seems to use a lot of CPU (> 20%), and ls -l
> /glustermount/ takes a very long time to respond (> 5 minutes).
> Note that with patch-719 the issue does not appear.
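Both symptoms can be quantified with standard tools before digging into GlusterFS itself; a minimal sketch, using the mount point reported above:

  # measure the listing latency
  time ls -l /glustermount/
  # one-shot snapshot of the server daemon's CPU usage
  top -b -n 1 -p "$(pidof glusterfsd)"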
>
> Nicolas Prochazka.
>
> 2008/12/8 nicolas prochazka <address@hidden>
>>
>> Thanks, it's working now.
>> Regards,
>> Nicolas Prochazka
>>
>> 2008/12/8 Basavanagowda Kanur <address@hidden>
>>>
>>> Nicolas,
>>>   Please use glusterfs--mainline--3.0--patch-719.
>>>
>>> --
>>> gowda
>>>
>>> On Mon, Dec 8, 2008 at 3:07 PM, nicolas prochazka
>>> <address@hidden> wrote:
>>>>
>>>> Hi,
>>>> It seems that glusterfs--mainline--3.0--patch-717 has a new problem
>>>> which does not appear, at least with glusterfs--mainline--3.0--patch-710.
>>>> Now I get:
>>>> ls: cannot open directory /mnt/vdisk/: Software caused connection abort
>>>>
>>>> Regards,
>>>> Nicolas Prochazka.
>>>>
>>>> My client spec file:
>>>> volume brick1
>>>>   type protocol/client
>>>>   option transport-type tcp/client  # for TCP/IP transport
>>>>   option remote-host 10.98.98.1     # IP address of server1
>>>>   option remote-subvolume brick     # name of the remote volume on server1
>>>> end-volume
>>>>
>>>> volume brick2
>>>>   type protocol/client
>>>>   option transport-type tcp/client  # for TCP/IP transport
>>>>   option remote-host 10.98.98.2     # IP address of server2
>>>>   option remote-subvolume brick     # name of the remote volume on server2
>>>> end-volume
>>>>
>>>> volume afr
>>>>   type cluster/afr
>>>>   subvolumes brick1 brick2
>>>> end-volume
>>>>
>>>> volume iothreads
>>>>   type performance/io-threads
>>>>   option thread-count 4
>>>>   option cache-size 32MB
>>>>   subvolumes afr
>>>> end-volume
>>>>
>>>> volume io-cache
>>>>   type performance/io-cache
>>>>   option cache-size 256MB            # default is 32MB
>>>>   option page-size 1MB               # default is 128KB
>>>>   option force-revalidate-timeout 2  # default is 1
>>>>   subvolumes iothreads
>>>> end-volume
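The topmost volume in this spec (io-cache) is what gets mounted, so requests flow io-cache -> iothreads -> afr -> brick1/brick2. As a usage sketch (the spec-file path and test filename are assumptions), replication can be checked by writing through the mount and looking for the file in both servers' backend directories:

  glusterfs -f /etc/glusterfs/client.vol /glustermount
  echo hello > /glustermount/afr-test
  # afr-test should now exist under /mnt/disks/export on both 10.98.98.1 and 10.98.98.2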
>>>>
>>>> My server spec file:
>>>> volume brickless
>>>>   type storage/posix
>>>>   option directory /mnt/disks/export
>>>> end-volume
>>>>
>>>> volume brick
>>>>   type features/posix-locks
>>>>   option mandatory on  # enables mandatory locking on all files
>>>>   subvolumes brickless
>>>> end-volume
>>>>
>>>> volume server
>>>>   type protocol/server
>>>>   subvolumes brick
>>>>   option transport-type tcp
>>>>   option auth.addr.brick.allow 10.98.98.*
>>>> end-volume
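A sketch of bringing this server spec up and checking that it accepts connections before any client mounts (the spec-file path is an assumption, and the listen port depends on the build, so the check just greps for the daemon):

  glusterfsd -f /etc/glusterfs/server.vol
  netstat -tlnp | grep glusterfsd  # confirm the daemon is listening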
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> hard work often pays off after time, but laziness always pays off now
>>
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>



