
Re: [Gluster-devel] GlusterFS Process Growing


From: Gordan Bobic
Subject: Re: [Gluster-devel] GlusterFS Process Growing
Date: Wed, 11 Feb 2009 20:31:24 +0000
User-agent: Thunderbird 2.0.0.19 (X11/20090107)

Mounting /tmp/ off a local ext3 file system made no measurable difference. The glusterfs process has grown by nearly 100MB again.
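A rough way to log the client's resident size once a minute while a build runs (a sketch - it assumes a single glusterfs client process; with more than one, substitute the PID of the rootfs client for the pidof):

# while sleep 60; do grep VmRSS /proc/$(pidof glusterfs)/status; done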

The only access to the local file system during this build should be reads of the compiler executables and the shared libraries they are dynamically linked against (along with the other build tools).

Can anyone explain or hazard a guess as to what might be leaking and where? Any other info that might be useful?

Gordan

Gordan Bobic wrote:
I just did some more testing, and the glusterfs process serving the rootfs grows like crazy when compiling the kernel - even when the kernel source tree isn't on that rootfs, but on an external NFS share!

I can think of only two things that this could be related to:
1) shared library loading/lookups
2) Lots of files being created and then deleted (the compiler keeps lots of files in /tmp/, which is on glusterfs)

Either way, deleted files seem to have handles or some such leaked in the glusterfs process (dcache or elsewhere). The problem is very real and very repeatable. I have just seen my rootfs glusterfs process grow from 60MB to 310MB, and it's still growing. It grows at a rate of about 100MB per kernel compilation.
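One way to check whether the client process is actually holding descriptors for unlinked files (rather than this being kernel-side dcache) would be something like the following, with <pid> standing in for the rootfs glusterfs PID:

# lsof -p <pid> | grep -c '(deleted)'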

I'm going to test this again with /tmp/ on a local FS to see if that makes a difference.
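For that test, either a tmpfs or a bind mount of a directory on the local ext3 partition over /tmp should do (the /mnt/ext3/tmp path below is just a placeholder):

# mount -t tmpfs tmpfs /tmp
# mount --bind /mnt/ext3/tmp /tmp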

Gordan

Gordan Bobic wrote:
On Tue, 10 Feb 2009 20:25:59 +0530, Anand Avati <address@hidden> wrote:

Is there a parameter to limit this cache? And how does this compare to the normal page cache on the underlying FS?

It is not possible to limit this cache as it is decided by the dcache pruning algorithm in the linux kernel vm. You can however force a forget of all the build cache by doing an 'echo 3 > /proc/sys/vm/drop_caches'.

Interesting, I thought the drop_caches didn't include the caches that
reflect in the process size itself. So the 300MB of the resident size of
glusterfs gets included? I'll have to try this. So, running

# updatedb

and/or

# ls -laR /

should make it go up to maximum size and not grow much further?
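Something along these lines should show it either way (a sketch - again assuming a single glusterfs client process, otherwise use the rootfs client's PID explicitly):

# ps -o vsz,rss -p $(pidof glusterfs)
# ls -laR / > /dev/null
# ps -o vsz,rss -p $(pidof glusterfs)
# echo 3 > /proc/sys/vm/drop_caches
# ps -o vsz,rss -p $(pidof glusterfs)

If the RSS climbs during the ls -laR but stops growing once the whole tree has been walked, that would point at the dcache rather than a leak.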

The thing that seems particularly odd is that although both the root and the /home glusterfs daemons seem to grow over time, the growth of the root one seems to be vastly greater. The rootfs only contains about 1.2GB of data, but I have seen its daemon process get to nearly 300MB. The /home FS contains several hundred gigabytes of data, but I haven't seen it grow to more than about 60MB. Since the /home FS gets a lot more r/w access, it seems to be something specific to the rootfs usage that causes the process bloat. Perhaps something like shared libraries (all of which are on the rootfs)? Maybe some kind of a leak in mmap() related code?

Memory usage of GlusterFS is largely decided by the dcache, which is a
reflection of how many files/directories are present in your volume
(and not related to the data size).
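If so, comparing the number of entries on the two volumes should roughly track the difference between the two daemons (a sketch, assuming they are mounted at / and /home):

# find / -xdev | wc -l
# find /home -xdev | wc -l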

OK, thanks for clarifying.

Gordan






