Re: [Gluster-devel] Feature requests of glusterfs


From: LI Daobing
Subject: Re: [Gluster-devel] Feature requests of glusterfs
Date: Fri, 4 Jan 2008 08:40:32 +0800

On Jan 4, 2008 3:16 AM, Krishna Srinivas <address@hidden> wrote:
> 2. nufa scheduler should support more than one `local-volume'. Sometimes
>   more than one child of unify is local. In this case it would be
>   valuable to set more than one local volume and use them randomly, in
>   turn, or use the second after the first is full.
> --
>
> Correct... can you raise a feature/bug request? (Though the priority
> for this will be low as of now.)

Done.
https://savannah.nongnu.org/bugs/index.php?21944
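
For reference, today's nufa scheduler takes a single local volume via
the nufa.local-volume-name option. A minimal sketch of the current form
and of what the requested extension might look like (the comma-separated
list is hypothetical syntax, not implemented; volume names are
placeholders):

  volume unify0
    type cluster/unify
    option namespace brick-ns      # namespace volume, definition omitted
    option scheduler nufa
    # current behaviour: a single local volume
    option nufa.local-volume-name brick1
    # requested behaviour (hypothetical, not implemented):
    # option nufa.local-volume-name brick1,brick2
    subvolumes brick1 brick2 remote1
  end-volume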

>
> --
> 3. reduce the effect of network latency in afr. Currently afr writes
>   the data to its children serially, so the speed is heavily affected
>   by network latency. How about adding a new kind of afr that combines
>   afr and io-threads? In the new xlator each child would run in a
>   separate thread, so the several send operations would run at the
>   same time, and the speed would be affected by the network latency
>   only once (instead of several times).
> --
>
> Because the write() call is handled asynchronously, i.e. when afr
> writes to a child which is protocol/client, we don't wait for that
> write to complete before calling write on the next child, this is as
> good as what you are saying (afr + io-threads for subvolumes), right?
> Or am I missing something?

YES.
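
For completeness, the afr + io-threads combination I described could be
spelled out with the existing xlators by putting performance/io-threads
between afr and each protocol/client, roughly as below (host and volume
names are placeholders). As you explain, it should be unnecessary,
since afr already issues the next write without waiting for the
previous child to complete.

  volume client1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume client2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume brick
  end-volume

  volume iot1
    type performance/io-threads
    subvolumes client1
  end-volume

  volume iot2
    type performance/io-threads
    subvolumes client2
  end-volume

  volume afr0
    type cluster/afr
    subvolumes iot1 iot2
  end-volume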

>
> About your 4th point, from what I understand, this is what you want:
>
> The client AFR knows about 3 machines: M1, M2, and M3.
> The client AFR writes on M1. AFR on M1 writes to its local disk and
> also writes to M2.
> Now AFR on M2 writes to its local disk and also writes to M3.
> This would be a complex setup. Also, to get 100 Mbps you would have to
> have client-M1, M1-M2, and M2-M3 on different networks. But if you are
> ready to have this kind of network, we can achieve 100 Mbps with the
> combination of xlators that is already there now, i.e. have
> server-side AFRs on M1, M2, and M3. The client will connect to M1
> (M1 will have write-behind above AFR), so writes from the client can
> use full bandwidth. If M1 fails, the client will connect to M2 via DNS
> round robin (some users on this list are already using this kind of
> setup), so AFR on M2 will then write to M2's local disk and to M3.

Thanks for your suggestion.
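
If I follow, the server volfile on M1 for the setup you recommend would
look roughly like this (paths, host names, and the auth line are
assumptions on my part; M2 and M3 would carry analogous volfiles so
that the client can fail over to them via DNS round robin):

  volume brick
    type storage/posix
    option directory /data/export
  end-volume

  volume m2
    type protocol/client
    option transport-type tcp/client
    option remote-host M2
    option remote-subvolume brick
  end-volume

  volume m3
    type protocol/client
    option transport-type tcp/client
    option remote-host M3
    option remote-subvolume brick
  end-volume

  # server-side AFR: local disk plus the bricks on M2 and M3
  volume afr0
    type cluster/afr
    subvolumes brick m2 m3
  end-volume

  # write-behind above AFR so client writes use full bandwidth
  volume wb
    type performance/write-behind
    subvolumes afr0
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.wb.allow *        # assumed 1.3-style auth option
    subvolumes wb
  end-volume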

>
> > > Also, there is a comment near the end of the definition of
> > > afr_sync_ownership_permission. This comment says that afr on afr
> > > won't work. This function is triggered by afr_lookup_cbk when
> > > self_heal is needed, and self_heal is very important for afr.
> > >
> > > Can anyone help clarify whether afr on afr has a problem?
> >
> > Yes, thinking about it now, I can see at least one reason why it
> > probably wouldn't work (afr extended attributes clash). The devs
> > expressed interest in chaining AFR before, so maybe it will become
> > a reality in the future.
>
> No, actually a clash of extended attributes does not cause problems.
> It is just that it is not implemented (it needs code changes, that's
> all). AFR over AFR used to work until directory self-heal was
> implemented, so it will definitely be made to work in the near future.

Good news.

>
> Please get back if there are any doubts or corrections.
>
Thanks.



-- 
Best Regards,
 LI Daobing



