
Re: [Gluster-devel] ping timeout


From: Michael Cassaniti
Subject: Re: [Gluster-devel] ping timeout
Date: Thu, 25 Mar 2010 17:47:28 +1100
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100310 Thunderbird/3.0.3

On 03/25/10 10:21, Gordan Bobic wrote:
> Christopher Hawkins wrote:
>> Correct me if I'm wrong, but something I would add to this debate is the type of split brain we are talking about. GlusterFS is quite different from GFS or OCFS2 in a key way, in that it is an overlay FS that uses locking to control who writes to the underlying files and how they do it.
>>
>> It is not a cluster FS the way GFS is a cluster FS. For example, if GFS suffers a split brain, fencing is the only thing preventing the complete destruction of all data, as both nodes (assuming only two) write to the same disk at the same time and utterly destroy the filesystem. GlusterFS, on the other hand, passes writes down to ext3 or whatever, so at worst you get out-of-date files or lost updates, not a useless partition that used to hold your data.
>>
>> I think less stringent controls are appropriate in this case, and that GFS / OCFS2 are entirely different animals when it comes to how severe a split brain can be. They MUST be strict about fencing, but with GlusterFS you have a choice about how strict you need to be.
>
> Not really. The only reason it is less bad is that the corruption affects individual files rather than the complete file system. Granted, this is much better than hosing the entire file system, but the fact remains that you are left with files that cannot be healed without manual intervention or without explicitly specifying which node should win via the favorite-child option.
Gordan,
Can you suggest how you would actually get the first node in your scenario back in sync?
If I have understood your scenario correctly, including what you believe should happen:
  • First node goes down. Simple enough.
  • Second node has new file operations performed on it that the first node never sees.
  • First node comes back up. It is completely fenced off from all other machines while it brings itself back in sync with the second node.
  • Second node goes down. Does this happen before or after the first node has synced?
    • If before, you are left with a fully isolated FS that is not accessible.
    • If after, there is no problem.
I would suggest writing a script and doing some firewalling to provide the fencing. I believe you can run ls -R on the file system to get it back in sync. You would need to mount GlusterFS locally on the first node, let it heal, and only open the firewall ports again afterward. Is that an appropriate solution?
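
To make that a bit more concrete, here is a rough sketch of the kind of script I have in mind, written as Python pseudocode. Everything specific in it is my own assumption rather than anything from the GlusterFS documentation: the interface name, brick port range, volfile path and mount point are placeholders, the iptables rule just drops inbound connections to the brick ports on the external interface so other clients are kept away while the local mount can still pull files from the second node, and the tree walk simply stat()s every entry, which is all a recursive ls really does. It is untested, so treat it as an outline to adapt rather than something to run as-is.

#!/usr/bin/env python
# Sketch of the fence -> mount locally -> self-heal -> unfence idea above.
# All paths, ports and the interface name below are placeholders.
import os
import subprocess

EXT_IFACE = "eth0"                            # interface other clients come in on
BRICK_PORTS = "6996:6998"                     # whatever ports your glusterfsd bricks listen on
CLIENT_VOLFILE = "/etc/glusterfs/client.vol"  # client (replicate) volfile
MOUNT_POINT = "/mnt/gluster"                  # local mount used only for healing

def set_fence(enabled):
    """Add (or remove) a DROP rule for inbound connections to the local bricks."""
    action = "-A" if enabled else "-D"
    subprocess.check_call(
        ["iptables", action, "INPUT", "-i", EXT_IFACE,
         "-p", "tcp", "--dport", BRICK_PORTS, "-j", "DROP"])

def trigger_self_heal(root):
    """Walk the mounted volume and stat() every entry; the lookup on each
    file is what prompts the replicate translator to copy the up-to-date
    version from the healthy node (the same effect as a recursive ls)."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                os.lstat(os.path.join(dirpath, name))
            except OSError:
                pass  # a file disappearing mid-walk is not fatal

if __name__ == "__main__":
    set_fence(True)                     # keep other clients off this node's bricks
    subprocess.check_call(              # mount the volume locally for the heal
        ["glusterfs", "-f", CLIENT_VOLFILE, MOUNT_POINT])
    trigger_self_heal(MOUNT_POINT)
    subprocess.check_call(["umount", MOUNT_POINT])
    set_fence(False)                    # back in business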
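
On the favorite-child option Gordan mentions: my understanding is that it is set in the cluster/replicate section of the client volfile, along the lines of the fragment below. The volume and subvolume names are made up, and the exact option syntax should be checked against the documentation for the release you are running.

volume replicated
  type cluster/replicate
  # the copy held by remote1 wins whenever the replicas disagree
  option favorite-child remote1
  subvolumes remote1 remote2
end-volume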

By the way, good job so far on GlusterFS, guys. I think you still have a way to go, and replicated setups should be a heavy focus area. There is no other product I know of that is FOSS and capable of replication without shared disks while still being POSIX compliant, including ACLs and xattrs.

Regards,
Michael Cassaniti


