Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak


From: Giuseppe Ragusa
Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak
Date: Sat, 29 Mar 2014 19:34:10 +0100

Hi,
I can confirm that memory usage is now perfectly normal (about 100 MiB) after simply disabling DRC.
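(For anyone who wants to keep an eye on this on their own setup: a quick way to watch the Gluster NFS server's resident memory is something like the following, assuming the usual process naming where the NFS server is a glusterfs process whose command line contains "nfs"; RSS is reported in KiB.)

    # show the Gluster NFS server process and its resident set size (KiB)
    ps -C glusterfs -o pid,rss,args | grep nfs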

Many thanks,
Giuseppe


From: address@hidden
To: address@hidden; address@hidden
Date: Fri, 28 Mar 2014 00:27:07 +0100
Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak

Hi,

> Date: Thu, 27 Mar 2014 09:26:10 +0530
> From: address@hidden
> To: address@hidden; address@hidden
> Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak
>
> On 03/27/2014 03:29 AM, Giuseppe Ragusa wrote:
> > Hi all,
> > I'm running glusterfs-3.5.20140324.4465475-1.autobuild (from the published
> > nightly rpm packages) on CentOS 6.5 as the storage solution for oVirt 3.4.0
> > (latest snapshot as well) on 2 physical nodes (12 GiB RAM) with
> > self-hosted-engine.
> >
> > I suppose this should be a good "selling point" for Gluster/oVirt, and I
> > have solved almost all of my oVirt problems, but one remains: the
> > Gluster-provided NFS server (used as a storage domain for the oVirt
> > self-hosted-engine) grows, after a reboot, to about 8 GiB of RAM usage in
> > about one day of no actual use (only the oVirt Engine VM is running on one
> > node, with no other operations on it or on the whole cluster); I even had
> > it die earlier, when I put it under cgroup memory restrictions.
> >
> > I have seen similar reports on the users and devel mailing lists, and I'm
> > wondering how I can help diagnose this, and/or whether it would be better
> > to rely on the latest 3.4.x Gluster (though it seems the stable line has
> > had its share of memleaks too...).
> >
>
> Can you please check if turning off drc through:
>
> volume set <volname> nfs.drc off
>
> helps?
>
> -Vijay
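If I read that right, from the host shell that would be (the volume name below is just a placeholder):

    # disable the NFS Duplicate Request Cache on the volume
    gluster volume set <volname> nfs.drc off

    # the option should then appear as "nfs.drc: off" under "Options Reconfigured"
    gluster volume info <volname>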

I'm reinstalling right now to start from scratch with clean logs, configuration, etc.
I will report back after one day of activity, but from the old system I can already confirm that the logs were full of messages like:

0-rpc-service: DRC failed to detect duplicates


as in BZ#1008301.
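In case it helps anyone else checking, counting those messages is a quick grep (assuming the default NFS server log location):

    # count DRC duplicate-detection failures in the Gluster NFS server log
    grep -c "DRC failed to detect duplicates" /var/log/glusterfs/nfs.log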
Many thanks for your suggestion.

Regards,
Giuseppe


