gluster-devel

Re: [Gluster-devel] Re-exporting NFS to vmware


From: Christopher Hawkins
Subject: Re: [Gluster-devel] Re-exporting NFS to vmware
Date: Thu, 6 Jan 2011 17:26:57 -0500 (EST)

----- "Gordan Bobic" <address@hidden> wrote:

> On 01/06/2011 06:52 PM, Christopher Hawkins wrote:
> > This is not scalable to more than a few servers, but one possibility
> > for this type of setup is to use LVS in conjunction with Gluster
> > storage.
> >
> > Say there were 3 gluster storage nodes doing a 3x replicate. One of
> > them (or a small 4th server) could have a virtual IP that accepts
> > inbound NFS requests and then forwards them to one of the 3 "real
> > server" gluster storage nodes. Each would be listening on NFS,
> > would have all the files, and would be able to serve directly and
> > respond with the IP that the packet was originally addressed to.
> 
> If I am understanding what you are describing correctly, you are
> talking 
> about using LVS in direct routing mode. That's pretty similar to what
> I 
> was describing. This would be scalable right up to the point where the
> 
> LB runs out of bandwidth to handle client responses. This is probably
> 
> fine for situations where clients are heavy readers. It wouldn't scale
> 
> at all of the clients are heavy writers.
> 
> What I was talking about in the previous email is having each service
> node act as an LB itself, able to load balance traffic to other nodes
> based on which node the actual files reside on - it'd have to be a
> GlusterFS translator because it would have to look up which node has
> the file and pass the request to that node.
> 
> It would essentially require a GlusterFS translator that combines LVS
> functionality with the GLFS interfaces to establish which node has the
> requested file.
> 
> And if I were to find myself with a month or two of coding time on my
> hands I might even be tempted to implement such a thing - but don't
> hold your breath, it's unlikely to happen any time soon. :)
> 
> Gordan
> 

How true.  ;)   Yes, that's exactly what I meant. My suggestion was kind of a
poor man's (or lazy man's!) version of what you were thinking about. It should
let a few servers break 500 MB/s with no problem, assuming mostly reads, but
beyond that it's not super useful.
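
To picture the poor man's version: something like the toy Python sketch below
is what the LVS director would be doing. The VIP, node names and round-robin
scheduling are all made up for illustration - the real thing lives in the
kernel's IPVS code and works by rewriting the destination MAC, with each
Gluster node also holding the VIP on a loopback alias so it can answer clients
directly.

# Toy sketch of the LVS direct-routing director's job. Real LVS does this
# in the kernel (IPVS); nothing here is actual LVS code, and the address
# and node names are invented for the example.
import itertools

VIP = "192.168.1.100"            # virtual IP that NFS clients mount (made up)
REAL_SERVERS = ["gluster1", "gluster2", "gluster3"]  # each also holds the VIP on lo

_rr = itertools.cycle(REAL_SERVERS)
_connections = {}                # (client ip, client port) -> chosen node

def schedule(client_ip, client_port):
    """Pick a Gluster node for a new NFS connection, round-robin.

    The director only forwards the inbound packet; the chosen node replies
    to the client directly from the VIP, so responses (the bulk of the read
    traffic) never pass back through the load balancer.
    """
    key = (client_ip, client_port)
    if key not in _connections:
        _connections[key] = next(_rr)
    return _connections[key]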

To flesh out the original idea:

Any node can take an NFS connection. This works currently, right? I have not
used the NFS capability yet, so I really don't know. If so, then any node could
take a request (an external load balancer or LVS would be needed), look up the
node with the file, and pass the request along. Then a potential translator
could tell a brick to respond directly to the client and spoof the IP of the
NFS server that received the original request. It seems like a straightforward
translator for someone in the know, and parts of it could probably be lifted
from the LVS codebase.
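
To make the "look up the node with the file" step concrete, here is a very
rough Python sketch of the lookup half. A real translator would be C against
GlusterFS's xlator interfaces and the actual distribute layout is more
involved - the brick names and crc32 hashing below are just stand-ins; with a
pure 3x replicate every node has the file anyway, so the lookup collapses to
plain load balancing.

# Toy "which brick owns this file" lookup, standing in for what the
# hypothetical translator would ask GlusterFS before passing the request
# along. Brick names and the hashing scheme are invented for the example.
import zlib

BRICKS = ["gluster1:/export/brick",
          "gluster2:/export/brick",
          "gluster3:/export/brick"]

def brick_for(path):
    """Map a file path to the brick that owns it in this toy layout."""
    h = zlib.crc32(path.encode("utf-8")) & 0xffffffff
    return BRICKS[h % len(BRICKS)]

# e.g. an NFS request for /vmware/vm1.vmdk arriving on any node would be
# handed off to brick_for("/vmware/vm1.vmdk"), and that node would answer
# the client directly from the front-end IP, LVS-direct-routing style.
print(brick_for("/vmware/vm1.vmdk"))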

This would allow you to completely load balance the NFS reads and writes across
the cluster. It should be a significant performance gain on a loaded NFS cluster
like the one in question, especially if the storage nodes all communicate over
InfiniBand.

Chris


