
Re: [Gluster-devel] RFC - "Connection Groups" concept


From: Amar Tumballi
Subject: Re: [Gluster-devel] RFC - "Connection Groups" concept
Date: Fri, 28 Jun 2013 16:55:58 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130311 Thunderbird/17.0.4

On 06/28/2013 12:37 AM, Anand Avati wrote:

On Wed, Jun 26, 2013 at 8:04 AM, Jeff Darcy <address@hidden> wrote:

    On 06/26/2013 11:42 AM, Joe Julian wrote:

        There are only two translators that use the network, server and
        client. I'm unclear how these communication groups would get
        applied.

        I lean a little bit toward being against solving network problems
        with application complexity. Can't these problems be solved with
        split horizon DNS and/or static routing?


    Some can; some can't.  Even for those that can, some users might prefer
    a solution within GlusterFS - as long as we come up with a coherent
    model - instead of having to deal with DNS or iptables.  One of the
    major problems that can't be solved that way is separate access controls
    for each connection group.  For example, it might be desirable to allow
    mounts from a particular machine (or to a particular volume) only
    through a particular network - both for security reasons and to prevent
    saturation of a critical network with non-critical traffic.

    I think Kaushal's connection-group idea is headed in the right
    direction.  We should use UUIDs as much as possible internally, as using
    either DNS names or IP addresses in this context is error-prone.  There
    should be a way for a CLI user to associate a "nickname" with a
    particular host, using either its UUID or one of its addresses at the
    moment of issuing the command.  Likewise, there should be a simple way
    to associate an interface with a connection group, using any of the
    interface's unique identifiers/properties at the time of association.
    Attaching that connection-group ID to an incoming connection/interface
    is pretty trivial, as is adding it to the "identity" that we use for
    access control.

    The trickier part is figuring out how to associate a connection group
    with a client and route appropriately from that end.  Do we have
    connection-group-specific volfiles?  How do we specify which one we
    want?  Adding more options to mount.glusterfs doesn't seem all that
    appealing, but I don't really see any way around it (obviously,
    without the options or special configuration, the behavior should
    stay as it is now).  The glusterd changes to handle this are likely
    to be pretty tedious, but IMO they're necessary to support some
    users' requirements.
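
To make the nickname/identity idea above concrete, here is a rough
sketch of what an extended client identity might carry (all names here
are invented for illustration; nothing like this exists in the tree):

    /* Hypothetical: a client identity extended with the connection
     * group of the interface the connection arrived on. */
    #include <uuid/uuid.h>

    typedef struct {
            uuid_t host_uuid;     /* peer UUID, stable even if DNS
                                     names or IPs change             */
            uuid_t group_uuid;    /* connection group attached to the
                                     incoming connection/interface   */
            char   nickname[256]; /* CLI-assigned alias for the host */
    } cg_client_identity_t;

On the client side, the eventual knob might be as simple as a
hypothetical mount option, e.g. "mount -t glusterfs -o
connection-group=fast-net server:/myvol /mnt" (again invented; no such
option exists today).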


To figure out which connection a client has to use, we could do
auto-discovery at GETSPEC time, based on which network interface the
GETSPEC request comes in on. We already have per-transport client
volfiles (one for tcp, one for rdma), and extending that to per-network
volfiles is natural. Today we ask the client to specify the transport
type in GETSPEC (e.g. "volname.rdma") - but even that could be retired
if we start using getsockname() to discover the incoming interface.
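
A minimal sketch of that discovery step (the group table and the
"volname.<group>" naming are assumptions, not existing code):

    /* Hypothetical: map the local address of an accepted socket to a
     * connection-group name, so GETSPEC can serve "volname.<group>". */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    static const char *
    group_for_socket (int client_fd)
    {
            struct sockaddr_storage ss;
            socklen_t               len = sizeof (ss);
            char                    addr[INET6_ADDRSTRLEN] = {0};

            if (getsockname (client_fd, (struct sockaddr *) &ss, &len))
                    return "default";

            if (ss.ss_family == AF_INET)
                    inet_ntop (AF_INET,
                               &((struct sockaddr_in *) &ss)->sin_addr,
                               addr, sizeof (addr));
            else if (ss.ss_family == AF_INET6)
                    inet_ntop (AF_INET6,
                               &((struct sockaddr_in6 *) &ss)->sin6_addr,
                               addr, sizeof (addr));

            /* Made-up group table; in practice this would come from
             * the connection-group configuration in glusterd. */
            if (!strncmp (addr, "10.0.", 5))
                    return "storage-net";
            if (!strncmp (addr, "192.168.", 8))
                    return "mgmt-net";
            return "default";
    }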

This way the client only specifies the appropriate (routable) mount
server IP and everything else is resolved automatically.

This seems like the better approach to me, too.

Another approach might be to just store the UUID of the host in the
client volfile, as a remote-uuid option (instead of the remote-host
option). The client can then query the mount server to resolve the UUID
to a host at that point in time via a HOSTMAPPER service (like our
PORTMAPPER service, which maps bricks to ports). This hostmapper can
maintain the relationship between all the host UUIDs in the trusted
pool and their respective interface IPs, and use the interface the
mapping request comes in on to perform the appropriate mapping. When in
doubt, it can always return the entire set of a host's IPs (with
transport types) and let the client figure out which of those IPs are
routable, and maybe even autodetect which is fastest. E.g., your server
might have both 1GbE and 10GbE interfaces while only some of your
clients have 10GbE; in such cases this kind of auto-discovery at mount
time might be desirable.
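
The mapping table for that could be quite small; a rough sketch of what
hostmapper might keep per host (illustrative names only):

    /* Hypothetical: one HOSTMAPPER entry per host UUID in the trusted
     * pool, listing every interface address with its transport type.
     * A mapping request is answered with the subset reachable from the
     * interface it arrived on, or the whole list when in doubt. */
    #include <uuid/uuid.h>

    typedef enum {
            CG_TRANSPORT_TCP,
            CG_TRANSPORT_RDMA
    } cg_transport_t;

    typedef struct {
            char           addr[64];  /* interface IP address    */
            cg_transport_t transport; /* transport usable on it  */
    } cg_iface_t;

    typedef struct {
            uuid_t     host_uuid;     /* key the client asks about  */
            int        iface_count;
            cg_iface_t ifaces[8];     /* all interfaces of the host */
    } cg_hostmap_entry_t;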

Thoughts?


I would implement something called 'BRICKMAPPER'. BRICKMAPPER would expose several procedures that could also be used by a 'showmount'-like utility for GlusterFS (from any client, not only from RHS server pools running 'glusterd').
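
For instance, the procedure set might look something like this
(hypothetical, just mirroring the portmapper style):

    /* Hypothetical BRICKMAPPER procedures; none of these exist today,
     * they only show the intended shape of the service. */
    enum gf_bmap_procnum {
            GF_BMAP_NULL = 0,
            GF_BMAP_BRICKS_BYVOL,  /* list the bricks of a volume      */
            GF_BMAP_HOST_BYBRICK,  /* brick -> host UUID and addresses */
            GF_BMAP_VOLS_BYHOST,   /* volumes a given host serves      */
            GF_BMAP_MAXVALUE
    };

A 'showmount'-like utility would then only need GF_BMAP_BRICKS_BYVOL
over the wire, without requiring glusterd on the client.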

-Amar



