Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak


From: Vijay Bellur
Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak
Date: Thu, 27 Mar 2014 09:26:10 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0

On 03/27/2014 03:29 AM, Giuseppe Ragusa wrote:
Hi all,
I'm running glusterfs-3.5.20140324.4465475-1.autobuild (from the published
nightly RPM packages) on CentOS 6.5 as the storage solution for oVirt 3.4.0
(latest snapshot as well) on 2 physical nodes (12 GiB RAM) with a
self-hosted engine.

I suppose this should be a good "selling point" for Gluster/oVirt, and I
have solved almost all of my oVirt problems, but one remains: the
Gluster-provided NFS server (serving as a storage domain for the oVirt
self-hosted engine) grows to about 8 GiB of RAM usage within about one day
of a reboot, with no actual usage (only the oVirt Engine VM is running on
one node, with no other operations on it or on the whole cluster). I have
even had it die before, when put under cgroup memory restrictions.

I have seen similar reports on the users and devel mailing lists, and I'm
wondering how I can help diagnose this, and/or whether it would be better
to fall back to the latest 3.4.x Gluster (though it seems that the stable
line has had its share of memory leaks too...).


Can you please check whether turning off DRC (the NFS duplicate request
cache) with:

volume set <volname> nfs.drc off

helps?
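For readers following along, a minimal sketch of how this might be done and verified from a regular shell; "myvol" is a placeholder volume name, and the `pgrep` pattern assumes the Gluster NFS process was started with the usual `--volfile-id gluster/nfs` arguments:

```shell
# Disable the NFS duplicate request cache (DRC) on the volume;
# replace "myvol" with your actual volume name.
gluster volume set myvol nfs.drc off

# Verify that the option was reconfigured (it appears under
# "Options Reconfigured" in the volume info output).
gluster volume info myvol | grep nfs.drc

# Afterwards, watch the Gluster NFS server's resident memory over time.
top -p "$(pgrep -f 'volfile-id gluster/nfs')"
```

Whether the leak is in DRC can then be judged by comparing the NFS process's RSS growth over a day with and without the option.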

-Vijay



