
Re: [Gluster-devel] Spurious disconnections / connectivity loss


From: Gordan Bobic
Subject: Re: [Gluster-devel] Spurious disconnections / connectivity loss
Date: Mon, 01 Feb 2010 13:53:29 +0000
User-agent: Thunderbird 2.0.0.22 (X11/20090625)

Stephan von Krawczynski wrote:
> On Mon, 01 Feb 2010 11:10:16 +0000
> Gordan Bobic <address@hidden> wrote:
>
>> Stephan von Krawczynski wrote:
>>> On Sun, 31 Jan 2010 13:37:49 +0000
>>> Gordan Bobic <address@hidden> wrote:
>>>
>>>> Stephan von Krawczynski wrote:
>>>>> On Sun, 31 Jan 2010 00:29:55 +0000
>>>>> Gordan Bobic <address@hidden> wrote:
>>>>>
>>>>> Slightly off-topic, I would like to ask whether you, too, have seen
>>>>> glusterfs use a lot more bandwidth than a comparable nfs connection
>>>>> on the server network side. It really looks a bit like a waste of
>>>>> resources to me...
>>>> I haven't noticed bandwidth going "missing", if that's what you mean.
>>>> I do my replication server-side, so the server replicates the writes
>>>> n-1 times for n servers, and my cacti graphs are broadly in line with
>>>> the expected bandwidth usage. If I disconnect all the mirrors except
>>>> the server I'm connecting to, the bandwidth usage between the client
>>>> and the server is similar to NFS.
>>>>
>>>> What bandwidth "leakage" are you observing?
>>> My replication is done client-side, because that is the only way to
>>> retain redundant access to the data if one server goes down (in
>>> theory). If I compare the bandwidth used by glusterfs with the
>>> bandwidth used by nfs for the same client, it is obvious that nfs uses
>>> far less bandwidth than glusterfs (comparing use of only one server, of
>>> course). Interestingly, incoming and outgoing server traffic are nearly
>>> the same, whereas nfs has far less incoming traffic (server side),
>>> obviously because the client writes a lot less than it reads.
>> That's hardly unexpected. If you are using client-side replicate, I'd
>> expect the bandwidth requirements to multiply with the number of
>> replicas. For all clustered configurations (not limited to glfs) I use a
>> separate LAN for cluster communication to ensure the best possible
>> throughput/latencies, and specifically in the case of glfs I do
>> server-side replicate, so that the replication traffic gets offloaded to
>> that private cluster LAN and the bandwidth requirements towards the
>> clients can be kept down to sane levels.
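
(To put rough numbers on that: with client-side replicate across two
servers, a client writing 1GB pushes about 2GB up its own link, 1GB to
each server, and each server sees about 1GB inbound. With server-side
replicate the client pushes about 1GB and the first server forwards the
second copy over the cluster LAN. With a single subvolume there is no
extra copy at all, so client-to-server traffic should look much like NFS,
which is why the single-server figures quoted next are the interesting
part.)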

> Sorry, Gordan, but that is completely unexpected. If I am using
> client-side replicate with _one_ server, there should be no more traffic
> than with nfs. But in fact incoming traffic on the server side jumped up
> to about the same level as outgoing. And that is obviously bogus.
> Please read "comparing use of only one server of course" above.

Sorry, I see what you mean now. Certainly, in the single-server case that seems very wrong. I'd rather like to hear the developers' comments on this.
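
In the meantime it should be easy to quantify: targetless iptables rules
just count packets/bytes, so something like the sketch below, run on the
server, would show whether inbound really matches outbound on a read-heavy
workload (6996 is the historical default listen-port; adjust it to
whatever your transport actually uses).

  # insert counting rules for the glusterfs transport port
  iptables -I INPUT  -p tcp --dport 6996
  iptables -I OUTPUT -p tcp --sport 6996
  # run the identical workload over glusterfs, then read the byte counters
  iptables -L INPUT  -v -n | head -5
  iptables -L OUTPUT -v -n | head -5
  # zero the counters between runs
  iptables -Z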

Gordan



