[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Gluster-devel] 1.3.8pre5 glusterfsd WARNING message


From: Krishna Srinivas
Subject: Re: [Gluster-devel] 1.3.8pre5 glusterfsd WARNING message
Date: Wed, 9 Apr 2008 17:22:35 +0530

Daniel,

You don't need unify/namespace for your setup here; you can directly
export gfs-ds-afr from the server and mount it on the client. Can you
paste your client spec? If the client connects to a server that goes
down, the client mount point becomes inaccessible; in that case you
need to set up DNS round robin, so that the client connects to the
next server and the mount point becomes accessible again.
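As a minimal sketch of what such a client spec could look like (the volume name "client1" and the hostname "server1" are assumptions, not taken from your setup):

```
# Client spec sketch: mount the server's AFR volume directly,
# without unify/namespace in between.
volume client1
  type protocol/client
  option transport-type tcp/client
  # For failover, "server1" can be a DNS name that round-robins
  # across your storage nodes.
  option remote-host server1
  # Name of the AFR volume exported on the server side.
  option remote-subvolume gfs-ds-afr
end-volume
```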

This should also answer the question in your other mail that was sent just now.

Regards,
Krishna

On Wed, Apr 9, 2008 at 2:55 PM, Daniel Maher <address@hidden> wrote:
> On Sun, 6 Apr 2008 00:42:15 -0700 "Amar S. Tumballi"
>  <address@hidden> wrote:
>
>  > Hi all,
>  >  GlusterFS-1.3.8pre5 (Release candidate for 1.3.8-stable) is
>  > available for download now.
>
>  Thanks for the new release.  I built the RPMs and upgraded my test
>  cluster to 1.3.8pre5.  After restarting glusterfsd on the storage
>  nodes, I noticed the following warning message (which I hadn't seen
>  with the FC8 glusterfsd RPM):
>
>  2008-04-09 09:14:57 C [unify.c:4158:init] gfs-unify: WARNING: You have
>  defined only one "subvolumes" for unify volume. It may not be the
>  desired config, review your volume spec file. If this is how you are
>  testing it, you may hit some performance penalty
>
>  What does this mean, exactly?  Based on the wiki, as well as feedback
>  from the list, my config is set up appropriately. Has the recommended
>  practice changed for a two-node HA / AFR cluster?
>
>
>  My unify volume definition :
>  # unify the dataspace and namespace
>  volume gfs-unify
>   type cluster/unify
>   subvolumes gfs-ds-afr
>   option namespace gfs-ns-afr
>   # TODO: study other schedulers
>   option scheduler rr                   # internal round robin-style scheduler
>  end-volume
>
>  My entire server config :
>  http://pastebin.ca/967749
>
>
>  Thanks !
>
>  --
>  Daniel Maher <dma AT witbe.net>
>
>
>  _______________________________________________
>  Gluster-devel mailing list
>  address@hidden
>  http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
