[Gluster-devel] Re: Memory leak(?) in GlusterFS TLA 2.5-patch-616


From: Sam Douglas
Subject: [Gluster-devel] Re: Memory leak(?) in GlusterFS TLA 2.5-patch-616
Date: Tue, 18 Dec 2007 14:45:15 +1300

On Dec 18, 2007 1:43 PM, Sam Douglas <address@hidden> wrote:
> I have come across a fairly large memory leak(?) in glusterfsd using
> the supplied configuration when doing many rewrites (read a block of
> data, change it, write the block back).  This results in one of the
> glusterfsd processes consuming a very large amount of the machine's
> memory (roughly 95%).
>
> It seems to occur when read-ahead and write-behind are loaded above
> unify on the client, which connects to two remote AFR volumes.

Additionally, it makes no difference whether the AFR subvolumes are
protocol/client volumes (as in the supplied configuration) or both
storage/posix volumes.

I could not reproduce the problem with the AFR translators, storage
bricks and the unify translator all loaded in the client.
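
For reference, a minimal sketch of such an all-in-one client spec file,
mirroring the translator stack from the configs below; the volume names
and directories here are illustrative, not taken from the original setup:

volume brick1a
        type storage/posix
        option directory /stuff/test1a
end-volume

volume brick1b
        type storage/posix
        option directory /stuff/test1b
end-volume

volume brick2a
        type storage/posix
        option directory /stuff/test2a
end-volume

volume brick2b
        type storage/posix
        option directory /stuff/test2b
end-volume

volume afr1
        type cluster/afr
        subvolumes brick1a brick1b
end-volume

volume afr2
        type cluster/afr
        subvolumes brick2a brick2b
end-volume

volume namespace
        type storage/posix
        option directory /stuff/test-namespace
end-volume

volume unify
        type cluster/unify
        option namespace namespace
        option scheduler rr
        subvolumes afr1 afr2
end-volume

volume unify-wb
        type performance/write-behind
        option aggregate-size 128K
        subvolumes unify
end-volume

volume unify-ra
        type performance/read-ahead
        option page-size 65536
        option page-count 32
        subvolumes unify-wb
end-volume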

> This occurs in glusterfs--main--2.5--patch-616.
>
> I have been able to reproduce the problem on Debian Etch machines,
> using fuse-2.7.0 compiled from source, a recent kernel, and the
> following configurations.
>
> The attached benchmark program can be used to trigger the problem;
>     rewrite_test --blocksize 8192 --count 8192 /mnt/foobar
> should do it. Bonnie will also trigger the problem when it runs its
> rewrite test.
>
>     Sam Douglas
>
>
>
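For readers without the attachment, the rewrite pattern described above
amounts to roughly the following (a hypothetical C sketch, not the
actual rewrite_test source; the file name is arbitrary, and the block
size and count match the command line above):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const size_t blocksize = 8192;
        const int count = 8192;
        char *buf = malloc(blocksize);
        int fd = open("/mnt/foobar/rewrite.dat", O_RDWR | O_CREAT, 0644);

        if (fd < 0 || buf == NULL) {
                perror("setup");
                return 1;
        }

        /* Write the file once so the rewrite pass has data to read. */
        memset(buf, 'x', blocksize);
        for (int i = 0; i < count; i++)
                write(fd, buf, blocksize);

        /* Rewrite pass: read each block, change it, write it back. */
        for (int i = 0; i < count; i++) {
                off_t off = (off_t) i * blocksize;
                pread(fd, buf, blocksize, off);
                buf[0] ^= 1;    /* modify the block */
                pwrite(fd, buf, blocksize, off);
        }

        close(fd);
        free(buf);
        return 0;
}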
> -- Config Files --
>
> --- Client ---
>
> volume clientA
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 10.0.0.19
>         option remote-subvolume afr
> end-volume
>
> volume clientB
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 10.0.0.20
>         option remote-subvolume afr
> end-volume
>
>
> volume namespace
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 10.0.0.19
>         option remote-subvolume namespace
> end-volume
>
> volume unify
>         type cluster/unify
>         option namespace namespace
>         option scheduler rr
>         subvolumes clientA clientB
> end-volume
>
> volume unify-wb
>         type performance/write-behind
>         option aggregate-size 128K
>         subvolumes unify
> end-volume
>
> volume unify-ra
>         type performance/read-ahead
>         option page-size 65536
>         option page-count 32
>         subvolumes unify-wb
> end-volume
>
>
> --- Server (10.0.0.19) ---
>
> volume brick1
>         type storage/posix
>         option directory /stuff/test1
> end-volume
>
> volume brick2
>         type storage/posix
>         option directory /stuff/test2
> end-volume
>
> volume namespace
>         type storage/posix
>         option directory /stuff/test-namespace
> end-volume
>
> volume client1
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 10.0.0.20
>         option remote-subvolume brick1
> end-volume
>
> volume afr
>         type cluster/afr
>         subvolumes brick1 client1
> end-volume
>
> volume server
>         type protocol/server
>         option transport-type tcp/server
>         option auth.ip.afr.allow *
>         option auth.ip.brick1.allow *
>         option auth.ip.brick2.allow *
>         option auth.ip.namespace.allow *
>         subvolumes afr brick1 brick2 namespace
> end-volume
>
>
> --- Server (10.0.0.20) ---
>
> volume brick1
>         type storage/posix
>         option directory /stuff/test1
> end-volume
>
> volume brick2
>         type storage/posix
>         option directory /stuff/test2
> end-volume
>
> volume client2
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 10.0.0.19
>         option remote-subvolume brick2
> end-volume
>
> volume afr
>         type cluster/afr
>         subvolumes client2 brick2
> end-volume
>
> volume server
>         type protocol/server
>         option transport-type tcp/server
>         option auth.ip.afr.allow *
>         option auth.ip.brick1.allow *
>         option auth.ip.brick2.allow *
>         subvolumes afr brick1 brick2
> end-volume
>
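If it helps reproduction: with spec files of this era, the servers are
typically started with something like "glusterfsd -f server.vol" and
the client mounted with "glusterfs -f client.vol /mnt/foobar" (the spec
file names here are illustrative).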



