
Re: [Gluster-devel] combining AFR and cluster/unify


From: Krishna Srinivas
Subject: Re: [Gluster-devel] combining AFR and cluster/unify
Date: Wed, 14 Mar 2007 19:00:09 +0530

On 3/14/07, Daniel van Ham Colchete <address@hidden> wrote:
On 3/14/07, Krishna Srinivas <address@hidden> wrote:
>
> Pooya,
>
> Your client spec was wrong. For a 4-node cluster with 2 replicas of
> each file, the following will be the spec file (similarly, you can
> write one for 20 nodes):
>
> ### CLIENT client.vol ####
> volume brick1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.11
>   option remote-port 6996
>   option remote-subvolume brick
> end-volume
>
> volume brick1-afr
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.12
>   option remote-port 6996
>   option remote-subvolume brick-afr
> end-volume
>
> volume brick2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.12
>   option remote-port 6996
>   option remote-subvolume brick
> end-volume
>
> volume brick2-afr
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.13
>   option remote-port 6996
>   option remote-subvolume brick-afr
> end-volume
>
> volume brick3
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.13
>   option remote-port 6996
>   option remote-subvolume brick
> end-volume
>
> volume brick3-afr
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.14
>   option remote-port 6996
>   option remote-subvolume brick-afr
> end-volume
>
> volume brick4
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.14
>   option remote-port 6996
>   option remote-subvolume brick
> end-volume
>
> volume brick4-afr
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 172.16.30.11
>   option remote-port 6996
>   option remote-subvolume brick-afr
> end-volume
>
> volume afr1
>   type cluster/afr
>   subvolumes brick1 brick1-afr
>   option replicate *:2
> end-volume
>
> volume afr2
>   type cluster/afr
>   subvolumes brick2 brick2-afr
>   option replicate *:2
> end-volume
>
> volume afr3
>   type cluster/afr
>   subvolumes brick3 brick3-afr
>   option replicate *:2
> end-volume
>
> volume afr4
>   type cluster/afr
>   subvolumes brick4 brick4-afr
>   option replicate *:2
> end-volume
>
> volume unify1
>   type cluster/unify
>   subvolumes afr1 afr2 afr3 afr4
> ...
> ..
> end-volume
>

I'm no gluster expert, but I think this config will put each file pair on the
same server, doesn't it? Like, volume afr4 uses brick4 and brick4-afr,
which happen to be on the same server, as its subvolumes.

Shouldn't it be something like:

volume afr1
 type cluster/afr
 subvolumes brick1 brick2-afr
 option replicate *:2
end-volume

volume afr2
 type cluster/afr
 subvolumes brick2 brick1-afr
 option replicate *:2
end-volume

volume afr3
 type cluster/afr
 subvolumes brick3 brick4-afr
 option replicate *:2
end-volume

volume afr4
 type cluster/afr
 subvolumes brick4 brick3-afr
 option replicate *:2
end-volume

So that every file has a copy of itself on two different servers?

Best regards,
Daniel Colchete
_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel


No. If you observe the following:
### CLIENT client.vol ####
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.11
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick1-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.30.12
  option remote-port 6996
  option remote-subvolume brick-afr
end-volume

brick1-afr is actually on the 2nd server. I just deviated from the
naming conventions used on our wiki, but the concept is still the same.
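For completeness, each server in this layout has to export two storage volumes, `brick` and `brick-afr`, which the client spec above refers to via `option remote-subvolume`. A server.vol for this is not shown in the thread; a sketch might look like the following, where the export directories and the `auth.ip` patterns are assumptions, not taken from the original mails:

```
### SERVER server.vol (per node, sketch) ###
volume brick
  type storage/posix
  option directory /export/brick       # assumed path
end-volume

volume brick-afr
  type storage/posix
  option directory /export/brick-afr   # assumed path
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option listen-port 6996
  subvolumes brick brick-afr
  option auth.ip.brick.allow *
  option auth.ip.brick-afr.allow *
end-volume
```

Each node exports its primary `brick` plus a `brick-afr` that holds the replica of the *previous* node's data, matching the client-side pairing.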


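Since the same rotated pattern (brick{i} paired with a brick{i}-afr on server i+1, wrapping around) extends to any cluster size, the "similarly you can write for 20 nodes" step can be sketched as a small generator script. This is an illustration, not from the thread; the hostnames and port are taken from the example spec, and `make_client_vol` is a hypothetical helper name:

```python
# Sketch: generate a client.vol for an N-node cluster with 2 replicas,
# following the rotated layout from the thread: brick{i}-afr lives on
# server i+1 (wrapping around), so no file pair lands on one server.

def make_client_vol(hosts, port=6996):
    """Return client.vol text for the given list of server addresses."""
    n = len(hosts)
    parts = []
    for i, host in enumerate(hosts, start=1):
        # Primary brick exported by server i.
        parts.append(
            f"volume brick{i}\n"
            f"  type protocol/client\n"
            f"  option transport-type tcp/client\n"
            f"  option remote-host {host}\n"
            f"  option remote-port {port}\n"
            f"  option remote-subvolume brick\n"
            f"end-volume\n"
        )
        # Replica brick for brick{i}, exported by the *next* server.
        next_host = hosts[i % n]
        parts.append(
            f"volume brick{i}-afr\n"
            f"  type protocol/client\n"
            f"  option transport-type tcp/client\n"
            f"  option remote-host {next_host}\n"
            f"  option remote-port {port}\n"
            f"  option remote-subvolume brick-afr\n"
            f"end-volume\n"
        )
    # One AFR translator per primary/replica pair.
    for i in range(1, n + 1):
        parts.append(
            f"volume afr{i}\n"
            f"  type cluster/afr\n"
            f"  subvolumes brick{i} brick{i}-afr\n"
            f"  option replicate *:2\n"
            f"end-volume\n"
        )
    # Unify all AFR pairs into one namespace.
    subvols = " ".join(f"afr{i}" for i in range(1, n + 1))
    parts.append(
        f"volume unify1\n"
        f"  type cluster/unify\n"
        f"  subvolumes {subvols}\n"
        f"end-volume\n"
    )
    return "\n".join(parts)

if __name__ == "__main__":
    hosts = [f"172.16.30.{11 + i}" for i in range(4)]
    print(make_client_vol(hosts))
```

With the four hosts from the thread this reproduces the pairing Krishna described (brick1 on .11 replicated to brick1-afr on .12, and brick4 on .14 wrapping back to .11); a 20-node spec is just a longer host list.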

