
Re: [Gluster-devel] Full bore.


From: Kevan Benson
Subject: Re: [Gluster-devel] Full bore.
Date: Thu, 15 Nov 2007 10:48:58 -0800
User-agent: Thunderbird 2.0.0.9 (X11/20071031)

Chris Johnson wrote:
On Thu, 15 Nov 2007, Kevan Benson wrote:
Chris Johnson wrote:
     Oh, this is ext3 with xattr turned on.  But we could use Reiserfs
if that would help.  I may run tests on that as well.

I don't know about others, but I wouldn't use reiserfs if you care about your data and aren't supplying redundancy above the file system level (with glusterfs, for example). I've seen multiple reports of Reiserfs having a tendency to develop unrecoverable errors in the file system when it gets sufficiently screwed up, basically necessitating a reformat of the partition.

     Huh.  Now that's interesting.  We've been running our mail server
off Reiserfs for a few years now and never had an issue.  It's survived
power outages and dropped drives.  We're using software RAID on it.

I didn't have any problems when I ran it either, but there are lots of horror stories out there. The basic gist was always "fsck did nothing, fs was unrecoverable. WTF?"

At first I dismissed them as people not taking the correct steps to fix problems, but after about a year of hearing these stories, I decided I'd rather be safe than sorry.

I read an interesting overview of file system recoverability once from an ext3 developer. Basically, ext3 treats data integrity as priority #1, and everything else flows from that. He had some interesting things to say about ReiserFS and even XFS (or was it JFS?). Basically, ReiserFS suffers from some major integrity problems in the case of crashes, and even XFS (JFS?) has an issue with crashes in the middle of file writes, such that a file being written to might be removed entirely (not accessible at all) and unrecoverable if the crash happens at the right point during the write. It was from an ext3 developer though, so you can read into that what you want.


     Can we have a discussion on whether I'm heading in the right
direction and what order things go in for the config files?

That depends on your goals. What's important here, speed, redundancy, or a mix of both?


      Both of course.  Are they mutually exclusive?  Please, I know
they can be to some extent.  I'm talking about the real world,
whatever that is.  I need to get the performance up, and
redundancy/failover would be real good too.  NFS has a few problems
with that.

Nope, not mutually exclusive; it just requires more hardware to be thrown at it. The nice thing with glusterfs is that it just needs to be "more", not necessarily "better".

What I finally settled on for my two-server setup, after all was said and done with my testing, was a simple AFR config with AFR handled on the clients. It writes to both servers and reads from one. Once the AFR translator is extended to load-balance reads of files across AFR members, I'll get a speed boost from that, and if they extend it to stripe read blocks between AFR members, I'll get even more speed.
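To give you a concrete picture, the client-side spec for that kind of setup looks roughly like the following. This is just a minimal sketch in the 1.3-series volume spec format, not my exact file; the hostnames (server1/server2) and the exported volume name ("brick") are placeholders you'd swap for your own:

  # one protocol/client volume per storage server
  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1        # placeholder hostname
    option remote-subvolume brick     # name of the volume exported by that server
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2        # placeholder hostname
    option remote-subvolume brick
  end-volume

  # AFR on the client: writes go to both remotes, reads come from one
  volume afr1
    type cluster/afr
    subvolumes remote1 remote2
  end-volume

The fuse mount sits on top of afr1, so if one server drops out the client keeps running against the other, and self heal catches the missing server up when it comes back.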

I chose this because:
1) It's extremely simple to administer and check consistency. I can just rsync between servers if I don't want to wait for file accesses to trigger self heal on everything (I need to sync extended attributes as well, but that's not too hard with a little scripting).
2) I found specifying translators on the clients provided cleaner failover than when they're specified on the servers.
3) Unify could be used to add performance, but it complicates the shares on the servers (the servers stay as simple as the sketch after this list). Most of the performance gains will be made up once load balancing is integrated into AFR.
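For contrast, here's roughly what each server's spec stays like when all the clustering logic lives on the clients. Again just a sketch in the 1.3-style syntax; the export directory and the auth line are placeholders:

  volume brick
    type storage/posix
    option directory /data/export     # placeholder path to the backing directory
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.brick.allow *      # tighten this to your client subnet
    subvolumes brick
  end-volume

Both servers run an identical spec, which is part of why consistency checking and rsync between them is so painless.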

Then again, right now my goal isn't performance but redundancy. I think of this as a redundant NFS replacement.

--

-Kevan Benson
-A-1 Networks



