gluster-devel

Re: [Gluster-devel] Re: Confusion GlusterFS over GFS (Global File System)


From: Gordan Bobic
Subject: Re: [Gluster-devel] Re: Confusion GlusterFS over GFS (Global File System)
Date: Sat, 08 Nov 2008 13:24:34 -0000
User-agent: Thunderbird 3.0a1 (Windows/2008052220)

mohan L wrote:
Now my questions are:
1). What is the difference between sharing a file system and sharing a block
device? Which one is best for shared database storage?

The answer is in the question. Sharing a block device means sharing a disk. If you have a SCSI array connected to two different machines, both can access and share the disks. Similarly, with a SAN that exports iSCSI shares, you can mount those shares on multiple machines simultaneously and have concurrent access to them. Concurrent access, however, requires a file system that is aware that the block device is shared, or the file system will get corrupted. GFS[2] and OCFS[2] are such cluster-aware file systems.
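For illustration (purely a sketch: it assumes a working cluster stack with the DLM already configured, and /dev/sdb1 and the cluster name are placeholders), creating and mounting a GFS2 file system shared by two nodes looks roughly like this:

  # run once, on either node: 2 journals for 2 nodes
  mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 2 /dev/sdb1

  # run on each node
  mount -t gfs2 /dev/sdb1 /mnt/shared

Both nodes then see the same file system on the same block device, with the DLM arbitrating concurrent access.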

If you have a SAN, are sharing block devices over multi-initiator SCSI, or are mirroring block devices with DRBD, then this is the solution for you.

GlusterFS, in comparison, is much more like NFS. It stores files on an existing file system (extended attribute support on the underlying file system is required; ext3 supports it, as do a number of other file systems) and serves them over the network. It also has added features for mirroring and distributing the data to provide higher performance and/or availability.
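To give you an idea of what the server side looks like, here is a minimal server volfile sketch (1.3-era syntax; translator names and option spellings vary between releases, and /data/export is a placeholder):

  volume posix
    type storage/posix
    option directory /data/export    # any xattr-capable fs, e.g. ext3
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server # plain "tcp" in newer releases
    option auth.ip.posix.allow *     # wide open; tighten for production
    subvolumes posix
  end-volume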

2). How do you compare GlusterFS and GFS?

It's an apples and oranges comparison. The two are not directly comparable because they are designed for fundamentally different purposes.

3). Can anyone recommend which one is best for our above requirement,
with some reasons?

If you don't have a SAN and are not mirroring your data at the block device level using DRBD, then block device based sharing is not even an option.

If you want 100% availability of all files from all servers, it sounds like you will need to mirror them using GlusterFS and have all clients mount multiple servers. You could also make the servers mirror each other and re-export via NFS (you'll need the patched fuse kernel module), but then you'll need to implement fail-over on the servers. It's probably simpler to side-step that issue and let GlusterFS handle the entire operation.
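As a sketch of the let-GlusterFS-handle-it approach (again 1.3-era syntax; server1/server2 are placeholder hostnames, and "posix" is the exported brick from the server sketch above), the client volfile mirrors two bricks like this:

  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1       # placeholder hostname
    option remote-subvolume posix
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2       # placeholder hostname
    option remote-subvolume posix
  end-volume

  volume mirror
    type cluster/afr                 # replicates every write to both bricks
    subvolumes remote1 remote2
  end-volume

If one server goes down, the client keeps working off the surviving copy.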

4). We are using MyISAM as the storage engine; can GlusterFS support it? Is
there any relevant documentation on MySQL with GlusterFS? I googled
it, but sadly I could not find relevant information.

No. Forget it. You COULD mirror the block device using DRBD, put GFS on top, and set MySQL to use external locking on its tables. That would achieve what you are describing, but the performance will be _poor_.
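If you did go down that road, every mysqld sharing the data directory would need something along these lines in my.cnf (a sketch; the default is skip-external-locking):

  [mysqld]
  external-locking          # let MyISAM coordinate via fs locks
  delay_key_write = OFF     # don't buffer key writes another node can't see
  query_cache_size = 0      # the cache can't see other nodes' writes

That fs-level locking on every table access is exactly where the performance goes.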

You're better off using MySQL's built-in replication features.
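A minimal master/slave setup (hostnames and credentials below are placeholders) is just:

  # master my.cnf
  [mysqld]
  server-id = 1
  log-bin   = mysql-bin

  # slave my.cnf
  [mysqld]
  server-id = 2

then on the master:

  GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

and on the slave:

  CHANGE MASTER TO MASTER_HOST='master.example.com',
    MASTER_USER='repl', MASTER_PASSWORD='secret';
  START SLAVE;

The slave replays the master's binary log instead of fighting it for table locks.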

5). I have 8 machines for testing purposes, but I have set up only two
machines so far: one as the GlusterFS server, with the GlusterFS client on
another machine. Using the client volume file I have mounted /var/dir from
the server on the client. Here I have a question: what is the job of the
GlusterFS server and of the GlusterFS client?

If you are referring to the process names, they will both show up in ps/top as [glusterfs].

6). Next I want to install the GlusterFS server on 5 machines, each with its
own local storage, and the GlusterFS client on one machine; then I want to
mount different directories from the servers on the one client machine. Is that the correct way?

That depends on what you want to achieve. You can have different volumes split up across different servers, but unless you apply the mirroring (AFR) translator, you won't get redundancy. There are various ways you can deal with this, depending on how you split your data up. Have a look through the GlusterFS wiki for examples of different configurations.
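For example, to pool five bricks into one namespace without redundancy, 1.3's unify translator looks roughly like this (remote1..remote5 and ns are protocol/client volumes defined as in the earlier sketch, with ns pointing at a small dedicated namespace brick):

  volume unify
    type cluster/unify
    option scheduler rr              # round-robin new files across bricks
    option namespace ns              # required namespace volume
    subvolumes remote1 remote2 remote3 remote4 remote5
  end-volume

Each file lives whole on exactly one brick, so losing a server loses that server's files unless you also layer AFR underneath.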

7). MySQL's default path is /var/lib/mysql/. How do I mount this
directory on the client? Can I mount it like a normal directory?

I'm pretty sure that will just corrupt your databases, since multiple mysqld instances writing to the same data directory cannot coordinate their locking over GlusterFS.

Gordan



