
Re: [Gluster-devel] gluster questions


From: Jonathan Fine
Subject: Re: [Gluster-devel] gluster questions
Date: Tue, 5 Feb 2008 08:51:42 -0500

Brent,

This sounds like exactly what I was looking for. Now a couple more questions:

Is the TLA code considered production-ready, or is it testing code that may still have some bugs? Since I will be using this for home directories in a research environment, I need something stable.

What does disabling direct-io do, and how does it affect GlusterFS and NFS? I looked it up on the client command-line options page of the wiki, but that didn't really give an in-depth explanation.

Thanks for giving me details on the setup of NFS re-exports.

Jon


On Feb 4, 2008, at 9:22 PM, Brent A Nelson wrote:

On Tue, 5 Feb 2008, Matt Paine wrote:

-Concerning NFS, and tied into the above question: we use it for its compatibility with Mac, Linux, and Solaris workstations, and it has worked fairly well; we'd like to stick with it. Would that require running a FUSE/GlusterFS client on one of the above nodes and re-exporting the mount as NFS? That is what I gather from my searches of the mailing lists. Or can you directly export a gluster brick via NFS? And if running the FUSE/GlusterFS client on one of the server nodes with an NFS export is necessary, is this a safe way to do things?

There were problems a while ago with exporting GlusterFS volumes as NFS exports, but I believe these have now been sorted out (if you use the GlusterFS-patched FUSE client). I can't tell you whether Mac and Solaris clients work with GlusterFS exports; all I can suggest is to download the source and give it a crack :) Or someone else might be able to jump in with a better answer.



I've been testing NFS re-export, and it appears to be working with very recent TLAs. What you need:

1) A custom-compiled FUSE, rather than the FUSE included with the kernel. You might as well use the latest GlusterFS-patched version.

2) As stated, a very recent TLA of GlusterFS.  You need patch >=642.

3) Mount your GlusterFS with -d DISABLE on the NFS server, as NFS re-export does not currently work with direct-io.

4) Use the fsid=# option on your export in /etc/exports (steps 3 and 4 are sketched below).
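
For illustration, steps 3 and 4 look roughly like this (the spec-file path, mount point, export options, and fsid value are placeholders; adjust for your own setup):

  # On the NFS server: mount the GlusterFS client with direct-io disabled
  glusterfs -d DISABLE -f /etc/glusterfs/client.vol /mnt/glusterfs

  # /etc/exports: give the export an explicit fsid, since a FUSE
  # filesystem has no stable device number for NFS to derive one from
  /mnt/glusterfs  *(rw,sync,fsid=10,no_subtree_check)

  # Tell the NFS server to re-read /etc/exports
  exportfs -ra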

That should do it; at least, it seems to work for Solaris and Linux clients. I have still seen an issue on Solaris where an idle bash shell cd'ed into the NFS mount may forget its current working directory (at least when the GlusterFS is an AFR volume), and I am checking whether actimeo=0 is a useful workaround. I haven't even confirmed whether this is a GlusterFS issue, however, and everything else seems to work fine. There used to be other issues, but all the ones I observed previously have been fixed, thanks to the diligent efforts of the GlusterFS team.
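
In case it helps anyone else testing this, the workaround I'm trying amounts to disabling attribute caching on the Solaris client's NFS mount, roughly (server name and paths are placeholders):

  # Solaris client: mount the re-export with attribute caching off
  mount -F nfs -o actimeo=0 server:/mnt/glusterfs /mnt/home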

As for GlusterFS mounts directly on Solaris, I haven't tried. I've been watching the mailing lists for FUSE on OpenSolaris (as I'm quite interested in this option), and it appears that it may be workable. However, they accepted a patch a few months ago that broke compatibility with older OpenSolaris kernels (such as those found in the OpenSolaris releases; even Solaris Express is too old, perhaps especially for Sparc), so you're practically forced to run the latest OpenSolaris (Nevada) code.

Thanks,

Brent Nelson
Director of Computing
UF Physics


_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel

************************************************
Jonathan Fine
address@hidden
814-863-4465

Sys Admin/IT Manager
Astronomy and Astrophysics
Eberly College of Science
Pennsylvania State University






