gluster-devel

Re: [Gluster-devel] Self-heal / inconsistency issue.


From: Amar S. Tumballi
Subject: Re: [Gluster-devel] Self-heal / inconsistency issue.
Date: Fri, 13 Jul 2007 10:26:46 +0530

Hi Scott,
Perfect. The picture below is right, and as you said, the performance
translators go below unify (in the spec).

About the HA requirement: AFR already does that work. It even does
self-heal (i.e. if machine1 comes back up after a while and a few files
changed in the meantime, afr will bring the latest image of those files to
both machines).
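
For reference, here is a minimal sketch of a spec for that four-machine
picture (volume names like mirror1/mirror2 and the hosts behind
machine1..machine4 are placeholders, not Scott's actual setup; the
protocol/client and namespace volumes are assumed to be defined the same way
as client1, client2 and namespace-afr in the spec quoted further down):

volume mirror1
  type cluster/afr
  subvolumes machine1 machine2     # protocol/client volumes for the first pair
  option replicate *:2
end-volume

volume mirror2
  type cluster/afr
  subvolumes machine3 machine4     # protocol/client volumes for the second pair
  option replicate *:2
end-volume

volume unify0
  type cluster/unify
  option scheduler rr
  option namespace namespace-afr   # namespace volume, itself mirrored with afr
  subvolumes mirror1 mirror2
end-volume

volume readahead
  type performance/read-ahead
  option page-size 65536
  option page-count 16
  subvolumes unify0                # performance translator stacks on top of unify
end-volume

Written in this order, the performance translator is defined after unify in
the file and takes unify as its subvolume, which is the same layering Scott's
spec below already uses.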


On 7/13/07, Scott McNally <address@hidden> wrote:

Alright, I will test this out tomorrow without the unify.  Just so my
understanding is correct, though, a unified afr setup would look like this:

machine 1   machine 2             machine 3   machine 4
       \       /                         \       /
        \     /                           \     /
         afr                               afr
           -----------------unify-----------------


Would the performance translators go above or below the unify? I would
think below?  Also, any idea on the timeline for client-side HA working?
(e.g. machine 1 goes down but that afr set stays up and reads from machine 2)
------------------------------
*From*: "Amar S. Tumballi" <address@hidden>
*Sent*: Thursday, July 12, 2007 7:28 PM
*To*: "Scott McNally" <address@hidden>
*Subject*: Re: [Gluster-devel] Self-heal / inconsistency issue.

Hi Scott,
 You are seeing this behaviour because of your spec file. This is an
inconsistency for unify, as you have the file on both machine1 and machine2,
and according to your spec file *only* unify is used, not afr (i.e. you will
not have mirroring). Since you have just two clients (client1, client2), use
either unify or afr, according to your needs.

-amar
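
To make that concrete, here is a sketch built only from the volumes already
in Scott's spec below (not a tested configuration): if mirroring between the
two bricks is the goal, the afr volume can be the topmost cluster translator
on the client, and unify dropped entirely.

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.138
  option transport-timeout 30
  option remote-subvolume brick
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.139
  option transport-timeout 30
  option remote-subvolume brick
end-volume

volume client-afr
  type cluster/afr
  subvolumes client1 client2       # every file is mirrored on both bricks
  option replicate *:2
end-volume

Any performance translators would then take client-afr (rather than unify)
as their subvolume.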

On 7/13/07, Scott McNally <address@hidden> wrote:
>
> Sequence to reproduce the bug on the version posted to the zresearch site
> on July 12th (pre4):
>
> gedit "whatever"  - brand new file on machine 1
> save with a line of text
>
> gedit "whatever" - machine 2 - append a new line of text
>
> return to machine 1 ..
>
> more "whatever"
>
> returns "No such file or directory" rather than opening the file and
> displaying its contents.
>
>
>
> 2007-07-12 16:57:39 D [fuse-bridge.c:413:fuse_getattr] glusterfs-fuse: GETATTR 1 ()
> 2007-07-12 16:57:39 D [fuse-bridge.c:413:fuse_getattr] glusterfs-fuse: GETATTR 1 ()
> 2007-07-12 16:57:41 D [fuse-bridge.c:413:fuse_getattr] glusterfs-fuse: GETATTR 1 ()
> 2007-07-12 16:57:41 D [fuse-bridge.c:413:fuse_getattr] glusterfs-fuse: GETATTR 1 ()
> 2007-07-12 16:57:42 D [inode.c:302:__active_inode] fuse/inode: activating inode(4530001), lru=2/1024
> 2007-07-12 16:57:42 D [fuse-bridge.c:338:fuse_lookup] glusterfs-fuse: LOOKUP 1/dealer (/dealer)
> 2007-07-12 16:57:42 D [inode.c:511:__create_inode] namespace2/inode: create inode(2377228)
> 2007-07-12 16:57:42 D [inode.c:302:__active_inode] namespace2/inode: activating inode(2377228), lru=0/1000
> 2007-07-12 16:57:42 E [afr.c:333:afr_lookup_cbk] ERROR: afr.c: afr_lookup_cbk: (gic->inode != inode) is true
> 2007-07-12 16:57:42 D [inode.c:260:__destroy_inode] namespace2/inode: destroy inode(2377228)
> 2007-07-12 16:57:42 D [inode.c:511:__create_inode] namespace1/inode: create inode(4530028)
> 2007-07-12 16:57:42 D [inode.c:302:__active_inode] namespace1/inode: activating inode(4530028), lru=0/1000
> 2007-07-12 16:57:42 E [afr.c:333:afr_lookup_cbk] ERROR: afr.c: afr_lookup_cbk: (gic->inode != inode) is true
> 2007-07-12 16:57:42 D [inode.c:260:__destroy_inode] namespace1/inode: destroy inode(4530028)
> 2007-07-12 16:57:42 D [fuse-bridge.c:288:fuse_entry_cbk] glusterfs-fuse: ENTRY => 4530001
> 2007-07-12 16:57:42 D [inode.c:332:__passive_inode] fuse/inode: passivating inode(4530001), lru=3/1024
> 2007-07-12 16:57:42 D [inode.c:302:__active_inode] fuse/inode: activating inode(4530001), lru=2/1024
> 2007-07-12 16:57:42 E [afr.c:696:afr_open_cbk] namespace-afr: (path=/dealer) op_ret=0 op_errno=2
> 2007-07-12 16:57:42 D [inode.c:332:__passive_inode] fuse/inode: passivating inode(4530001), lru=3/1024
>
>
>
>
>
> Server configuration on both machines:
>
> volume brick
>   type storage/posix                   # POSIX FS translator
>   option directory /var/glustervolume    # Export this directory
> end-volume
>
> volume brick-ns
>   type storage/posix
>   option directory /var/glusterNamespace
> end-volume
>
> ### Add network serving capability to above brick.
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
>   subvolumes brick brick-ns
>   option auth.ip.brick.allow * # Allow access to "brick" volume
>   option auth.ip.brick-ns.allow *
> end-volume
>
>
> Configuration on the clients (reverse the IPs, of course):
>
> ## NAMESPACE  volume
> ## the namespace volume stores the directory structure and
> ## helps with the healing of nodes
> volume namespace1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.0.138
>   option transport-timeout 30
>   option remote-subvolume brick-ns
> end-volume
>
> volume namespace2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.0.139
>   option transport-timeout 30
>   option remote-subvolume brick-ns
> end-volume
>
> volume namespace-afr
>   type cluster/afr
>   subvolumes namespace1 namespace2
>   option replicate *:2
> end-volume
>
> ##end namespace volume
>
> ## client volumes
>
> volume client1
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 192.168.0.138     # IP address of the remote brick
> # option remote-port 6996              # default server port is 6996
>   option transport-timeout 30          # seconds to wait for a reply
>   option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume client2
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 192.168.0.139     # IP address of the remote brick
> # option remote-port 6996              # default server port is 6996
>   option transport-timeout 30          # seconds to wait for a reply
>   option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume client-afr
>   type cluster/afr
>   subvolumes client1 client2
>   option replicate *:2
> end-volume
>
> ## now unify this 1 brick thing
> volume unify
>   type cluster/unify
>   option scheduler rr  # check alu, random, nufa
>   option rr.limits.min-free-disk 5 # 5% of free disk is minimum.
>   option namespace namespace-afr
>   subvolumes client1 client2
> end-volume
>
> ## Add readahead feature
> volume readahead
>   type performance/read-ahead
>   option page-size 65536     # unit in bytes
>   option page-count 16       # cache per file = (page-count x page-size)
>   subvolumes unify
> end-volume
>
> ## Add IO-Cache feature
> volume iocache
>   type performance/io-cache
>   option page-size 128KB
>   option page-count 128
>   subvolumes readahead
> end-volume
>
> ## Add writeback feature
> volume writeback
>   type performance/write-behind
>   option aggregate-size 131072 # unit in bytes
>   subvolumes iocache
> end-volume
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



--
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]





--
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]

