From: Chris Smith
Subject: Re: [Vrs-development] Cluster Management Message (was CVS)
Date: Thu, 21 Feb 2002 12:05:22 +0000

On Wednesday 20 February 2002 16:31, you wrote:

> If they traverse a network, doesn't that by definition
> mean that they travel from socket to socket? And that
> means some port number.
>
> > So I would say that "a request to any of these 
> > ports results in the same response" is false.
> > Each port supports different traffic.
>
> Is it a GW characteristic that services are
> preassigned a specific port?

Oops - I've misled you. The GWDomain controller is
assigned a single specific port through which it performs
all inter-domain communications.  If you had a single
isolated GWDomain that never wanted to contact any
others then you wouldn't need this port at all.
It's a gateway port to the outside world (well, a world
of other GWDomains).

> That would mess us up a bit in the Cluster idea.
> My thought was that all
> Cluster Management Messages travel through the same
> port, and get routed to the right LDS module by the
> Port Manager, perhaps in the form of Phoenix.  That
> manager would listen to port:80 for service traffic
> and port:xx for CMM stuff.

I might have initially misunderstood you: I thought
you were saying that a message appearing at either port
would cause the same response - which won't hold true
if one is reserved for CMM traffic and the other for
web service request traffic.
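
Just to be sure we're picturing the same thing, here's a
throwaway Python sketch of that two-listener arrangement.
The port numbers and handler names are all mine, purely
illustrative - nothing here is real VRS or GW code:

    import socketserver
    import threading

    SERVICE_PORT = 8080   # stand-in for the port:80 service traffic
    CMM_PORT = 9090       # stand-in for the port:xx CMM traffic

    class ServiceHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Whatever arrives here is treated as a service request.
            self.wfile.write(b"service response\n")

    class CMMHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Same process, different port, different behaviour.
            self.wfile.write(b"cluster management response\n")

    def serve(port, handler):
        with socketserver.TCPServer(("", port), handler) as srv:
            srv.serve_forever()

    threading.Thread(target=serve, args=(SERVICE_PORT, ServiceHandler),
                     daemon=True).start()
    serve(CMM_PORT, CMMHandler)

Same process, two ports, two behaviours - which is exactly
why 'any port gives the same response' can't hold.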

> Well, let me clarify a point.  I have talked about two
> different sets of data.  The data placed in the
> Repository is "segmented, distributed and mirrored".
> However, the Cluster Image Data is NOT segmented.  It
> IS distributed and mirrored, but not to disk, only to
> in-memory tables in all LDSs.

Yeah, sorry, I was forgetting the concept of Cluster 
Image Data... 
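
So that I don't forget it again - a toy Python sketch of
how I picture those in-memory tables.  Every name below is
made up by me; it's an illustration, not the design:

    class ClusterImage:
        """In-memory only - never written to disk."""
        def __init__(self, peers):
            self.table = {}      # node id -> node state
            self.peers = peers   # the other LDSs to mirror to

        def apply_update(self, node_id, state, rebroadcast=True):
            # Update the local table, then mirror the whole entry
            # (no segmenting!) to every peer LDS.
            self.table[node_id] = state
            if rebroadcast:
                for peer in self.peers:
                    peer.apply_update(node_id, state, rebroadcast=False)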

> You're right on here; the classic issue would be
> synching the data adequately.  Of course, we are all
> familiar with locks, and the feared 'deadly embrace'
> they imply.
>
> I had mentioned some time ago that I often look to
> biology for inspiration.  There are two fundamentally
> different ways that a biological organism coordinates
> itself.  The one most familiar to folks, and closest to
> computer science, is the nervous system.  The other is
> chemical, and is far more fundamental to a cell's and
> an organism's life.  There are organisms without nervous
> systems of any kind.  This form uses the movement of
> molecules as messengers.
>
> What distinguishes this form from the nervous system
> is that it is both 'loosely coupled' and highly
> specific.  The use of an undirected messenger, a
> chemical released into some loose transport medium
> (i.e. the blood), makes it 'loosely coupled'.  But the
> message carried is very specific, once the messenger
> arrives.
>
> This is the opposite of the nervous system, where the
> coupling is very tight, a one-to-one wire, but the
> message is very simple, like "Zap. You're it".
>
> How might the 'loosely coupled, specific message' idea
> translate to software design?  Damn good question.

Broadcast events. And you can't guarantee that they'll
arrive either, which is the same for a chemical response.
Chemical messages are also slow.  TBH, developing a
system that propagates changes in this fashion is much
easier (IMHO) than the instant 'MUST BE SYNCHRONISED'
type.
You've just gotta watch your step and still offer out-of-
date data if whoever is asking for it clearly has no
idea that things have moved on.  You could let them know
at this point if you wanted to - if you've written a
friendly system with an eye on performance!

Chemical systems are a good analogy!!!!
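
To pin down the 'watch your step' bit, a quick Python
doodle.  The version counter is my own assumption and all
the names are invented:

    class LooseStore:
        def __init__(self):
            self.data = {}  # key -> (version, value)

        def on_broadcast(self, key, version, value):
            # Broadcasts can arrive late, twice or never; keep the
            # newest version we've seen and drop stale duplicates.
            current_version, _ = self.data.get(key, (0, None))
            if version > current_version:
                self.data[key] = (version, value)

        def read(self, key, version_seen=0):
            # Always answer, even with old data, but flag when the
            # caller's view has clearly moved on.
            version, value = self.data.get(key, (0, None))
            return value, version, version > version_seen

Readers always get an answer, possibly stale, plus a hint
that things have moved on - loosely coupled delivery,
specific message.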

> An example might be the Pointer Table in the
> Repository that's posted already.  At first glance, it
> seems to have a very rigidly defined structure.  But
> it's awfully damned casual in its work.  The list of
> Mirror Block addresses can, in fact, be very fuzzy.
> The stack of addresses can be damaged, and it will
> still work.  Little homeostatic feedback loops, like
> the CRC check on a data block at the Cluster Block
> level, can catch bad blocks or misassigned blocks and
> fix them.

:o)

> The connection between the Cluster Block and the
> Mirror Blocks is loosely coupled.  But the CRC check
> makes the data of the message extremely specific.
>
> Now, theoretically, this should make a far more robust
> system.  I guess the question is just when does fuzzy
> become mush.  I suppose we will just have to build it
> and test it to answer those questions.

Once you've got your find and locate algorithms worked
out, a paper dry run (lots of diagrams on a whiteboard
are good too!) will help.  It usually helps me!
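
On paper, the CRC feedback loop you describe could be as
simple as this (Python again; fetch_block is a made-up
stand-in for the real block transport):

    import zlib

    def read_block(mirror_addresses, expected_crc, fetch_block):
        # The mirror list may be fuzzy or partly damaged; the CRC
        # recorded with the Cluster Block keeps the data specific.
        for addr in mirror_addresses:
            data = fetch_block(addr)
            if data is not None and zlib.crc32(data) == expected_crc:
                return data      # first copy that checks out wins
        return None              # every mirror failed - flag for repair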

> A Net service request to any level I or II LDS should
> see exactly the same list of available services, those
> posted to the Cluster Registry. Now, a particular LDS
> may end up specializing in a popular service, but only
> because it has already loaded the necessary dataset.
> It would have nothing to do with who owned the LDS
> host computer.

Ah, but how does a client that wants to invoke a
particular net service find an LDS to satisfy the
request?  There might be one, there might be many.
If the repository is truly distributed, then any LDS
in the cluster should be able to satisfy a request for
data within the repository.


> > As LDSs join the cluster, the cluster management
> > sorts out the cluster, but the new LDS needs to be
> > added to the UDDI thingy.
>
> There shouldn't be any relationship between which
> LDSs are online and which Net services the Cluster
> offers.

Exactly.  My point is that the VRS is just a collection
of cooperating LDSs.  So if a client wants to access
a net service within the repository of an LDS cluster
then it has to contact one of the LDSs in the cluster.
Fine.  But it'll need to look up the net service in some
directory somewhere to find out 'where to go' for it.
If you only have one LDS advertised in this directory
and it happens to go off line, then you might still have
20 LDSs supporting the cluster, but the advertised
access point is now unavailable, and so your entire
cluster is unavailable.  Clients external to the
cluster cannot see it.  So for resilience, you should
at least advertise all LDSs that are willing to be
publicly accessible from the Directory Service.
As more 'yes, I will be public' LDSs come online,
they too need to be added to the Directory Service.
If the VRS is designed to be dynamic, then the
Directory Service too needs to be dynamic to keep
up with the VRS.
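
In code terms, all I'm after is that a client can do
something like this (Python doodle; directory.lookup and
call_lds are stand-ins for the UDDI-style directory and
whatever transport we settle on):

    def invoke_service(directory, service_name, request, call_lds):
        # Walk every advertised public LDS until one answers; the
        # cluster only 'disappears' when *all* of them are down.
        for lds_endpoint in directory.lookup(service_name):
            try:
                return call_lds(lds_endpoint, request)
            except ConnectionError:
                continue  # that LDS is off line - try the next
        raise RuntimeError("no advertised LDS reachable for " + service_name)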


I've got a feeling that you've just been talking about
LDSs accessing the other LDSs within the cluster.

I was looking from the point of view of an arbitrary
remote client that wants to get at a resource that the
cluster is offering.

Hope we're talking about the same thing now! :o)

Perhaps set up an IRC chat soon?

Cheers
-- 
Chris Smith
  Technical Architect - netFluid Technology Limited.
  "Internet Technologies, Distributed Systems and Tuxedo Consultancy"
  E: address@hidden  W: http://www.nfluid.co.uk


