
Re: [Gluster-devel] ioc taking too much memory


From: Amar S. Tumballi
Subject: Re: [Gluster-devel] ioc taking too much memory
Date: Thu, 28 Aug 2008 14:38:03 -0700

Hi Dan,
 It's actually a known problem at the moment: io-cache has an issue when it runs on top of stripe. The memory usage you are seeing is certainly not right, and we are treating it as a bug. I am working on fixing the io-cache/stripe problem; once that is done, the memory usage bug should hopefully go away as well.

Everyone is busy with testing and bug fixes on the 1.4 branch right now, hence the backlog in answering mails.
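In the meantime, a quick way to keep an eye on the client's resident memory while reproducing this (a sketch, not part of GlusterFS itself; it assumes the mount process is named glusterfs and that pgrep/ps are available):

```shell
# Poll the glusterfs client's resident set size (RSS) every 5 seconds.
# pgrep -o picks the oldest matching process; adjust the pattern if you
# run several glusterfs processes on the box.
while true; do
  ps -o rss= -p "$(pgrep -o glusterfs)" | awk '{printf "%.1f MB\n", $1/1024}'
  sleep 5
done
```

With cache-size set to 2048MB you would expect the figure to level off somewhere above 2 GB (cache plus the process's own overhead), not keep climbing toward the machine's full 4 GB.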

Regards,
Amar

2008/8/28 Dan Parsons <address@hidden>

> Anyone have any comments on this?
>
>
> Dan Parsons
>
>
>
> On Aug 26, 2008, at 12:32 AM, Dan Parsons wrote:
>
>  I'm running glusterfs 1.3.11. I have cache-size set to '2048MB' in my
>> conf file, but in this particular test I'm running (catting a 6.3GB file to
>> /dev/null), it isn't stopping at 2GB. As of this moment it's gone to 3.8GB
>> and the box only has 4GB RAM; I'm watching curiously to see when the box
>> will crash. I assume this is non-standard behavior? It should stop at 2048MB
>> right?
>>
>> Vitals: CentOS 5.2 64-bit, kernel 2.6.23.14, glusterfs-1.3.11,
>> fuse-2.7.2glfs9
>>
>> Below is my entire config file, though the relevant section is ioc.
>>
>> ### Add client feature and attach to remote subvolume of server1
>> volume distfs01
>> type protocol/client
>> option transport-type tcp/client     # for TCP/IP transport
>> option remote-host 10.8.101.51      # IP address of the remote brick
>> option remote-subvolume brick        # name of the remote volume
>> end-volume
>>
>> ### Add client feature and attach to remote subvolume of server2
>> volume distfs02
>> type protocol/client
>> option transport-type tcp/client     # for TCP/IP transport
>> option remote-host 10.8.101.52      # IP address of the remote brick
>> option remote-subvolume brick        # name of the remote volume
>> end-volume
>>
>> volume distfs03
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.8.101.53
>> option remote-subvolume brick
>> end-volume
>>
>> volume distfs04
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.8.101.54
>> option remote-subvolume brick
>> end-volume
>>
>> volume stripe0
>>  type cluster/stripe
>>  option block-size *:1MB
>>  option scheduler alu
>>  option alu.order read-usage:write-usage:disk-usage
>>  option alu.read-usage.entry-threshold 20%
>>  option alu.read-usage.exit-threshold 4%
>>  option alu.write-usage.entry-threshold 20%
>>  option alu.write-usage.exit-threshold 4%
>>  option alu.disk-usage.entry-threshold 2GB
>>  option alu.disk-usage.exit-threshold 100MB
>>  subvolumes distfs01 distfs02 distfs03 distfs04
>> end-volume
>>
>> volume ioc
>>  type performance/io-cache
>>  subvolumes stripe0         # the cache is layered on top of the stripe
>> volume; change this to match your own spec file.
>>  option page-size 1MB      # 128KB is default
>>  option cache-size 2048MB    # 32MB is default
>>  option force-revalidate-timeout 5 # 1second is default
>>  option priority *.psiblast:3,*.seq:2,*:1
>> end-volume
>>
>>
>>
>> Dan Parsons
>>
>>
>>
>>
>>
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!

