From: Dan Parsons
Subject: Re: [Gluster-devel] Excessive memory usage with 1.3.12
Date: Wed, 5 Nov 2008 11:43:41 -0800
There is a commercial entity behind glusterfs - ZResearch(.com) - which employs most, if not all, of the developers. The developers are (imo) very good and quick to respond to nearly every problem; it's just this one particular issue where a response/fix has been a bit slow.
glusterfs is moving from 1.3.x to 1.4.x with some fundamental changes involved, but I don't think that's the same as what you mean by "transitional state".
The product has been extremely stable for me (8 gbit/s of IO spread across 4 servers to 33 CPU nodes, bioinformatics work), and this memory "bug" hasn't caused a crash under real work yet, only in testing - but that's just because our job input size is currently small.
So in summary, I love glusterfs, the devs/company behind it are solid, it performs better for my work than others (pvfs, lustre) - it's just this one io-cache memory bug that's been getting less-than-average attention, and perhaps with this recent spark in attention that will change :)
Dan Parsons

On Nov 5, 2008, at 11:20 AM, rhubbell wrote:
> On Wed, 2008-11-05 at 19:55 +0100, Lukas Hejtmanek wrote:
>> On Wed, Nov 05, 2008 at 10:08:24AM -0800, Dan Parsons wrote:
>>> Lukas, just to confirm your findings, I have the exact same problem and
>>> reported it about 2 months ago. Just like you, when all my stuff was
>>> running under 32-bit, it wasn't an issue because of the 2GB limit, but
>>> now that I'm using 64-bit for everything, it is a potential system crashing bug.
>> Yes, it's the same, unfortunately, I have no response from the authors. So
>> nobody cares?
> Well somebody cares. Us. But I am new here and wondering how development on this project is funded. Is it all volunteer? Partially funded via commercial offerings? Is the project in a transitional state? Soon to go commercial?