Re: [Gluster-devel] performance


From: Kevan Benson
Subject: Re: [Gluster-devel] performance
Date: Tue, 06 Nov 2007 11:13:42 -0800
User-agent: Thunderbird 2.0.0.6 (X11/20070728)

Chris Johnson wrote:
On Tue, 6 Nov 2007, Kevan Benson wrote:


1) Stock fuse, or glusterfs-patched fuse? See http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/. The GlusterFS team has changed some default values in fuse to make it perform better for common glusterfs scenarios, and added a fix for locking, so you are better off using the glusterfs-supplied fuse if you want better performance and/or locking.

     Stock because I could get an RPM for it.

If you already have an RPM for 2.7.0, just get the SRPM, change the line referencing the source tarball from fuse-2.7.0.tar.gz to fuse-2.7.0-glfs5.tar.gz, and rebuild.
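Roughly, on a Red Hat style system, the rebuild goes something like this (a sketch only; paths and spec file details vary by distro, and you may also need to adjust the version or %setup directory to match the new tarball name):

  # install the source RPM, then point the spec at the glusterfs tarball
  rpm -ivh fuse-2.7.0-*.src.rpm
  cp fuse-2.7.0-glfs5.tar.gz /usr/src/redhat/SOURCES/
  cd /usr/src/redhat/SPECS
  # edit fuse.spec so its Source line names fuse-2.7.0-glfs5.tar.gz
  rpmbuild -ba fuse.spec
  rpm -Uvh /usr/src/redhat/RPMS/*/fuse-*.rpm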

If you aren't familiar with building RPMs, or don't have time, I suggest removing the RPM for fuse and compiling from source. Not as nice as keeping everything in an RPM, but it might be worth it.

There's a trade-off here between performance/compatibility and ease of administration; it's up to you to decide which one's more important.

2) The read-ahead and write-behind translators are there to boost performance in certain scenarios, if you know what kind of access your mount will be seeing most of the time.

     Serial reads and writes mostly.  Very little if any random stuff.

I'm not the best person to tell you when or how to use those translators, but they are there and can probably help; a rough sketch follows.
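For mostly serial reads and writes, the client-side volfile might look something like this (the option names and values here are from the 1.3-era docs and are assumptions on my part; check the defaults for your version, and server1/brick are placeholder names):

  volume client
    type protocol/client
    option transport-type tcp/client
    option remote-host server1          # hypothetical server name
    option remote-subvolume brick
  end-volume

  volume readahead
    type performance/read-ahead
    option page-size 128KB              # how much to prefetch per request
    option page-count 4                 # pages kept prefetched per file
    subvolumes client
  end-volume

  volume writebehind
    type performance/write-behind
    option aggregate-size 128KB         # batch small writes up to this size
    subvolumes readahead
  end-volume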

3) The real speed benefits arise when you can span reads across multiple servers, increasing response and transfer rates. That (along with redundancy) is where glusterfs wins, and it's something NFS can't really compete with (unless you're using Solaris).

     Striping?  I thought that was frowned upon.

Not the stripe translator, but AFR (if and when it supports striped reading to some degree) and/or unify. If you are disk-bound, you could put up four servers and unify them for a theoretical 4x speedup in reads and writes of multiple files, without AFR redundancy (see the sketch below). You wouldn't necessarily see a speedup in a single read or write operation, but in most cases that's not what you want to be looking at.
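Assuming client1 through client4 are protocol/client volumes defined like the one sketched earlier, the unify part might look like this (in the 1.3 series unify also wants a dedicated namespace volume, here a hypothetical ns defined the same way; the scheduler choice is yours):

  volume unify0
    type cluster/unify
    option scheduler rr                 # round-robin file placement
    option namespace ns                 # separate namespace brick for unify
    subvolumes client1 client2 client3 client4
  end-volume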

4) That's a real close benchmark. Are you sure the medium over which you are transferring the data isn't maxed out? IB or TCP/IP? 100Mbit or 1000Mbit (and server-grade or workstation-grade cards if gigabit)?


     The servers are on gigabit.  It was a preliminary test.  I need
to do it over known gigabit on both ends.  That's next.

I suspect that if your benchmarks are within 1 ms of each other, you are seeing a limitation in the disk at one end or the other (are you saving the file to a local disk, to a ramdisk, or throwing it away?), or a limitation in the network. I don't usually see a 1 ms difference in concurrent tests on the same setup, but then I'm not entirely sure what you mean by 1 ms/read slower or how you came by that number. Can you elaborate on your testing?
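For what it's worth, a quick way to take the destination disk out of the equation is something along these lines (a sketch; mount points, file names, and sizes are placeholders):

  # sequential read through the glusterfs mount, data discarded so the
  # local disk can't be the bottleneck
  time dd if=/mnt/glusterfs/bigfile of=/dev/null bs=1M count=1024

  # same file over the NFS mount for comparison
  time dd if=/mnt/nfs/bigfile of=/dev/null bs=1M count=1024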


--

-Kevan Benson
-A-1 Networks



