
Re: [Gluster-devel] glusterfs-1.3.8pre1


From: Sascha Ottolski
Subject: Re: [Gluster-devel] glusterfs-1.3.8pre1
Date: Sat, 23 Feb 2008 06:36:29 +0100
User-agent: KMail/1.9.6 (enterprise 0.20070907.709405)

On Friday 22 February 2008 18:52:37, Dan Podeanu wrote:
> Current configuration:
>
> - 16 clients / 16 servers (one client/server on each machine)
> - servers are dual opteron, some of them quad core, 8 or 12 gb ram
> - kernel 2.6.24-2, linux gentoo (can provide gluster ebuilds)
> - fuse 2.7.2glfs8, glusterfs 1.3.7 - see config files - basically a
> simple unify with no ra/wt cache


Hi Dan,

sorry, I can't give you very much advice.

However, to answer your first question: my experiments showed that you 
need to upgrade all clients and servers at once if you want to move 
from 1.3.0pre4 to 1.3.7 or later; the different versions seem unable 
to communicate with each other. I guess a more informative error 
message in the logs would be most helpful...

I'd like to ask if you could share your experience with using glusterfs 
for serving your images; that is, what's the performance of your setup?

I'm trying to do something similar, serving about 20 million image 
files with (currently 7) webservers that read them from a 4-server 
afr/unify gluster mount. Unfortunately, the performance is not really 
making me happy: in the live application it never goes beyond ~40 
req/sec per webserver, so the accumulated performance is below 300 
req/sec. I did experiment with different webservers, with varying and 
sometimes amazing results (see my earlier posts if you like). You say 
you see 300 reads/second; is this per server, or for the whole 
cluster? My goal would be to achieve several thousand requests/second...
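
For reference, my client spec is essentially the usual afr-over-unify 
stacking; a trimmed-down sketch of that shape (hostnames, volume names 
and the namespace volume are placeholders here, not my exact config):

  volume web1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1          # placeholder hostname
    option remote-subvolume brick
  end-volume

  # ...three more protocol/client volumes, web2..web4, one per server...

  volume afr1
    type cluster/afr
    subvolumes web1 web2                # first mirrored pair
  end-volume

  volume afr2
    type cluster/afr
    subvolumes web3 web4                # second mirrored pair
  end-volume

  volume unify0
    type cluster/unify
    option namespace ns                 # 1.3.x unify needs a namespace volume
    option scheduler rr                 # round-robin placement of new files
    subvolumes afr1 afr2
  end-volume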

If I understand correctly, you have no afr in your setup, and you keep 
some of the files local to the webserver. I'm wondering, do you have 
load balancing in place that tries to schedule requests so that there 
is a high chance the respective webserver has the file in question on 
its local share?
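
On the placement side, unify's nufa scheduler aims at exactly that 
kind of locality: it prefers the local volume when creating files, so 
a file tends to end up on the machine that first wrote it. A minimal 
sketch (option names from memory of the 1.3 docs, so please 
double-check; "brick-local" is a placeholder):

  volume unify0
    type cluster/unify
    option namespace ns
    option scheduler nufa
    option nufa.local-volume-name brick-local  # prefer this machine's brick
    subvolumes brick-local client2 client3
  end-volume

Note that this only covers where files are created; steering the HTTP 
requests to the right box would still be the load balancer's job.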

I also see the memory leak, but as I'm still running 1.3.0pre4, the 
cause may not be the same. What might be notable, though: the leaks 
only appear on glusterfs clients that do heavy reading (i.e. the 
webservers). I also have roughly 20 app servers mounted that write to 
the gluster servers, and no such leakage appears there (I just checked 
that on one of those servers the glusterfs process footprint is as 
small as 10 MB).

FWIW, I experimented a lot with read-ahead, write-behind and 
io-threads, but could never see any significant difference 
performance-wise. For io-cache, I've seen a gain of roughly 100% in 
some setups, but none at all in others (that is, apache gains, nginx 
and lighttpd don't).
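
For anyone who wants to reproduce those numbers, this is the shape of 
the translator stack I tested on the client, layered over the unify 
volume (option names as I remember them from the 1.3 docs; the sizes 
are just values I happened to try, not recommendations):

  volume iot
    type performance/io-threads
    option thread-count 8
    subvolumes unify0
  end-volume

  volume ra
    type performance/read-ahead
    option page-size 128KB              # read-ahead chunk size
    option page-count 16
    subvolumes iot
  end-volume

  volume wb
    type performance/write-behind
    option aggregate-size 1MB
    subvolumes ra
  end-volume

  volume ioc
    type performance/io-cache
    option cache-size 256MB
    option page-size 1MB
    subvolumes wb
  end-volume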


Thanks,

Sascha



