From: Gordan Bobic
Subject: Re: [Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)
Date: Thu, 07 Jan 2010 09:43:11 +0000
User-agent: Thunderbird 2.0.0.22 (X11/20090625)

Anand Avati wrote:
>> So - I did a redneck test instead - dd 64MB of /dev/zero to a file on
>> the mounted partition.
>>
>> On writes, NFS gets 4.4MB/s, GlusterFS (server-side AFR) gets 4.6MB/s.
>> Pretty even. On reads, GlusterFS gets 117MB/s, NFS gets 119MB/s (on the
>> first read after flushing the caches; after that it goes up to
>> 600MB/s). The unbuffered readings are in the same ballpark, and the
>> small difference on the reads is roughly what I'd expect considering
>> NFS is running over UDP and GLFS over TCP.
>>
>> So in conclusion - there is no performance difference between them
>> worth speaking of. What is the point, then, in implementing a
>> user-space NFS handler in glusterfsd when unfsd seems to do the job as
>> well as glusterfsd could reasonably hope to?
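
(For reference, the test boiled down to something like the following - the
mount point is illustrative and the cache-flush step is the usual one rather
than an exact transcript of what I typed:)

  # write test: push 64MB of zeros onto the mounted partition
  dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=64

  # flush the page cache so the first read comes off the wire
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # read test: the first pass is unbuffered, later passes come out of cache
  dd if=/mnt/test/ddfile of=/dev/null bs=1M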

> Can you clarify if your tests had a setup where the NFS re-export
> would result in 2 hops for IO? From what you have shown, it looks like
> both the tests had just one (physical) network hop (not considering
> loopback).

There are 3 servers in AFR, so I would have thought the write test would require the write to be propagated to all the servers - the primary server the client is connecting to, plus the two others. Or does that not count as multiple hops in terms of what you are describing?
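
(To be explicit about the re-export under discussion: the NFS server is just
a glusterfs client mount exported through the user-space unfsd, roughly along
these lines - the volfile path, export line and unfsd flags are illustrative,
not my exact configuration:)

  # hop 1: mount the replicated volume as an ordinary glusterfs client
  glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

  # hop 2: re-export that mount to NFS clients via unfs3's user-space unfsd
  echo "/mnt/glusterfs 192.168.0.0/24(rw,no_root_squash)" >> /etc/exports
  unfsd -e /etc/exports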

> The need for an NFS xlator becomes apparent when you want to re-export
> a distributed glusterfs configuration via NFS, which can result in more
> than one network hop for IO. Context switches between these two hops
> make things considerably worse, and having the NFS xlator inside
> glusterfs allows it to use the caches in the performance translators
> very effectively.

NFS has FS-Cache support on the client. I would have thought this would address that issue.
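
(That is, on the NFS client something along these lines - assuming a
cachefilesd-backed cache is configured; the mount options are illustrative:)

  # start the local cache daemon that backs FS-Cache
  service cachefilesd start

  # mount with the fsc option so NFS reads get cached on local disk
  mount -t nfs -o fsc,vers=3 server1:/export /mnt/nfs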

Gordan



