
Re: [Gluster-devel] HA failover question.


From: Chris Johnson
Subject: Re: [Gluster-devel] HA failover question.
Date: Wed, 17 Oct 2007 09:49:39 -0400 (EDT)

On Wed, 17 Oct 2007, Daniel van Ham Colchete wrote:

     Ok, clearly I'm missing something here.  I had no idea you could
do AFR just from the client side.  That's cool.  And that would
seem to allow for failover too, yes?  You still need two servers.
They self-heal if one comes back?  How does that work?  Does
there need to be server support for this?

     Well aware of the shortcomings.  When you live off grants and
shoestrings you do what you can.  Neat box, yes.  Weighs a ton.
Unless you strip it you need a forklift to move it.

     Any other thoughts on how to get as much availability out of this
as possible with GlusterFS would be helpful.

Chris,

so both servers are accessing the same SATABeast to export the same
filesystem? If so, AFR is not what you are looking for: AFR will try to
replicate the files to both servers, and they already do this on the
back-end. If you have two different iSCSI virtual disks, one for each
server, then AFR is what you are looking for.

Right now AFR always reads from the first available subvolume, so it does
not split the read traffic. This will change with GlusterFS 1.4 (with the HA
translator, though I'm not sure). But you can define two AFRs, each with a
different server as its first subvolume. Doing AFR on the client side, it
would look like this:

=== BEGIN CLIENT SPEC FILE ===
volume s1-b1
       type protocol/client
       option transport-type tcp/client
       option remote-host 172.16.0.1
       option remote-subvolume b1
       option transport-timeout 5
end-volume

volume s2-b1
       type protocol/client
       option transport-type tcp/client
       option remote-host 172.16.0.2
       option remote-subvolume b1
       option transport-timeout 5
end-volume

volume s1-b2
       type protocol/client
       option transport-type tcp/client
       option remote-host 172.16.0.1
       option remote-subvolume b2
       option transport-timeout 5
end-volume

volume s2-b2
       type protocol/client
       option transport-type tcp/client
       option remote-host 172.16.0.2
       option remote-subvolume b2
       option transport-timeout 5
end-volume

volume s1-bn
       type protocol/client
       option transport-type tcp/client
       option remote-host 172.16.0.1
       option remote-subvolume bn
       option transport-timeout 5
end-volume

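# afr1 mirrors brick b1 across both servers; s1-b1 is listed first, so
# reads for this half go to 172.16.0.1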
volume afr1
       type cluster/afr
       subvolumes s1-b1 s2-b1
       option replicate *:2
end-volume

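# afr2 mirrors brick b2; s2-b2 is listed first, so reads for this half
# go to 172.16.0.2, which is what spreads the read traffic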
volume afr2
       type cluster/afr
       subvolumes s2-b2 s1-b2
       option replicate *:2
end-volume

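# unify stitches the two mirrored pairs into one filesystem; note that the
# namespace volume s1-bn lives only on 172.16.0.1 and is not replicated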
volume unify
       type cluster/unify
       subvolumes afr1 afr2
       option namespace s1-bn
       option scheduler rr
       option rr.limits.min-free-disk 5
end-volume
=== END CLIENT SPEC FILE ===
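
The client spec above assumes each server exports bricks named b1 and b2,
and that 172.16.0.1 also exports the namespace brick bn. A matching
server-side spec would look roughly like the sketch below; the /export/*
directories and the wide-open auth.ip rules are placeholders, not something
taken from your setup:

=== BEGIN SERVER SPEC FILE (sketch) ===
volume b1
       type storage/posix
       option directory /export/b1
end-volume

volume b2
       type storage/posix
       option directory /export/b2
end-volume

# only needed on 172.16.0.1, which serves the unify namespace
volume bn
       type storage/posix
       option directory /export/bn
end-volume

volume server
       type protocol/server
       option transport-type tcp/server
       subvolumes b1 b2 bn
       # allowing every client is for illustration only; restrict as needed
       option auth.ip.b1.allow *
       option auth.ip.b2.allow *
       option auth.ip.bn.allow *
end-volume
=== END SERVER SPEC FILE (sketch) ===

On the second server you would drop the bn volume and remove it from the
subvolumes line.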

With this you have the replication you need, plus the read traffic is shared
between the two front-end storage servers. Write performance is always
limited by the slower of the two servers, since every write goes to both.

Please pay attention to the fact that you still have a serious single point
of failure: a fire, electrical problems, human error and many other things
can take out that single SATABeast. I would have two; I always pair
everything. But I really liked that SATABeast, with 42 disks in only 4U.

What do you think?

Best regards,
Daniel




-------------------------------------------------------------------------------
Chris Johnson               |Internet: address@hidden
Systems Administrator       |Web:      http://www.nmr.mgh.harvard.edu/~johnson
NMR Center                  |Voice:    617.726.0949
Mass. General Hospital      |FAX:      617.726.7422
149 (2301) 13th Street      |Man's a kind of missing link
Charlestown, MA., 02129 USA |fondly thinking he can think.  Piet Hein
-------------------------------------------------------------------------------



