Re: [Vrs-development] Cluster Management Message (was CVS)
From: Chris Smith
Subject: Re: [Vrs-development] Cluster Management Message (was CVS)
Date: Tue, 19 Feb 2002 09:47:18 +0000
On Monday 18 February 2002 18:11, Bill Lance wrote:
> Forgive me for being so dense, but I'm still confused.
> Are you using 'domain' in the sense of discovering an
> IP address, or as a synonym of an IP namespace?
Goldwater Domains.
Have a look at http://www.nfluid.co.uk/goldwater/ for a refresher.
The basic premise of my current thinking is that if disparate Goldwater
apps can naturally interoperate through GWDomains (I'll use that term from
now on to avoid confusion :o) ), then why not have a look and see if we can
build the LDS cluster (ie a VRS) on top of this layer? GWDomains then handle
the whole network thing for you. All you do in the LDS implementation is
call Goldwater Services (from now on called GWServices....) You don't need
to know 'where they are' - GWDomains take care of the routing for you - you
don't even need to know there is a network involved anywhere!
[A note for ME.... remote GWDomains being detected as available/unavailable
from within a local GWDomain handler need to generate a GWEvent. This event
may be subscribed to by an LDS GWService to propagate this information to
the Service Discovery Server].
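That note is really just a publish/subscribe pattern. A minimal sketch of the idea in C follows; every name in it (`gw_event_subscribe`, `GW_DOMAIN_STATE`, `lds_on_domain_state`, and so on) is invented for illustration and is not a real Goldwater API:

```c
/* Sketch only: a domain-state event published by a local GWDomain handler,
 * with an LDS service subscribed so it could forward the change to the
 * Service Discovery Server. All identifiers here are hypothetical. */
#include <string.h>

#define GW_DOMAIN_STATE 1
#define MAX_SUBS 8

typedef void (*gw_event_cb)(int event, const char *domain, int available);

static gw_event_cb subs[MAX_SUBS];
static int nsubs = 0;

/* An LDS GWService registers interest in domain availability events. */
static void gw_event_subscribe(gw_event_cb cb) {
    if (nsubs < MAX_SUBS)
        subs[nsubs++] = cb;
}

/* Called by the local GWDomain handler when a remote domain is
 * detected as available/unavailable: fan out to all subscribers. */
static void gw_event_publish(int event, const char *domain, int available) {
    for (int i = 0; i < nsubs; i++)
        subs[i](event, domain, available);
}

/* Stand-in for the LDS GWService: record the change so it could be
 * pushed on to the Service Discovery Server. */
static char last_domain[64];
static int last_available = -1;

static void lds_on_domain_state(int event, const char *domain, int available) {
    (void)event;
    strncpy(last_domain, domain, sizeof last_domain - 1);
    last_domain[sizeof last_domain - 1] = '\0';
    last_available = available;
    /* ...here we would notify the Service Discovery Server... */
}
```

The point of the indirection is that the GWDomain handler never needs to know the Service Discovery Server exists; it just raises the event.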
We really need to talk about dotGNU service discovery too. Some banter has
passed through the dotGNU developers list, but we need to work towards
something concrete, as the VRS (and its access points) will only be locatable
via the discovery server. Maybe we should assign this as a specific task.
> > Yep cluster traffic.
> > If we're not sending 'cluster' traffic over HTTP,
> > then we need some sort of
> > network server. Phoenix is a very stable,
> > efficient, bolt on functionality
> > server that already integrates into Goldwater apps
> > (even though I've spent
> > the last few weeks trying to separate them!)
> >
> > Basically you write a module containing a
> > read_handler and event_handler that
> > performs the network server functionality you
> > require, assign a port number
> > to it and get phoenix to load it on start up.
>
> the inet function ...
Now that is interesting. One of the ideas I had for Phoenix a few years
back was to build a (small) set of inet modules - POP3/finger/blah blah.
I started a POP3 server - it worked but was not very efficient.
> > There are some rules to comply with when writing such a module as
> > Phoenix is an internally multiplexed, thread free, fork'less server.
>
> Is this by intent? What do you see as the pros and
> cons of this?
Phoenix is a network portal. It is an engine that binds to multiple ports/IP
addresses and calls *your* service module when a client connecting to *your*
port requires servicing.
Phoenix running as a SINGLE server which handles ALL connections is very
efficient and scales really really well (some work needs to be done to boot
multiple instances of the engine for SMP support). It can flood a 100Mb
network before it gets anywhere near choking if you write your Phoenix
modules properly. Basically Phoenix modules are just like interrupt
handlers. Whilst a module is processing a single connection, no other
connections are being processed. You must keep your processing time to a
minimum, but you do get as many chances as you need to finish processing a
single connection. The Phoenix API forces you to be mindful of this, and to
date this has never been a problem.
[ At Companies House it currently serves approx 4.5 million requests a week
(0700-2359, 6 days a week). I really should find out what the peak rate is...
In this period it appears to consume something of the order of 8 hours of CPU
time (which works out to approx 9,300 requests per CPU minute) - and it gets
hammered! ].
So Pros: Scales very well, low CPU and memory footprint, swaps very quickly,
low connection start up time. Ideal for serving a very large number of
connections with short data bursts.
Cons: Doesn't make use of additional processors in SMP servers. If it dies,
it drops all connections (in 4 years of continuous use it has never aborted.
Honest!), though if it does die, it restarts straight away. Not ideally suited
to serving large blocks of data to a large number of connections. Still needs
SSL support.
Again, have a look at http://www.nfluid.com/goldwater/ for a synopsis of
Phoenix (though in that reference it is discussed as the network access point
for remote Goldwater applications).
I'm not trying to sell Phoenix - but we need some sort of 'server', and a web
server is no good, so we'll probably have to write one. Phoenix is a generic
engine that handles all network comms and passes data to 'modules' via 'read'
and 'event' function hooks. What the module does is up to your design
talents!
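To make the 'module' idea concrete, here is a rough sketch of what an engine-plus-hooks arrangement could look like. Again, none of these names (`register_module`, `engine_dispatch_read`, the hook typedefs) are the real Phoenix interface - they are assumptions standing in for it:

```c
/* Sketch: a generic engine owns the sockets; each module registers a
 * 'read' and an 'event' hook against a port, and the engine routes
 * incoming data to whichever module owns that port. */
#include <stddef.h>
#include <string.h>

typedef void (*read_hook)(int conn, const char *data, size_t n);
typedef void (*event_hook)(int conn, int event);  /* e.g. connect/disconnect */

typedef struct {
    int port;
    read_hook on_read;
    event_hook on_event;
} module;

#define MAX_MODULES 16
static module modules[MAX_MODULES];
static int nmodules = 0;

/* What a module does at start-up: claim a port and supply its hooks. */
static void register_module(int port, read_hook r, event_hook e) {
    if (nmodules < MAX_MODULES) {
        module m = { port, r, e };
        modules[nmodules++] = m;
    }
}

/* The engine side: hand arriving bytes to the module owning the port. */
static void engine_dispatch_read(int port, int conn, const char *data, size_t n) {
    for (int i = 0; i < nmodules; i++)
        if (modules[i].port == port) {
            modules[i].on_read(conn, data, n);
            return;
        }
}

/* A trivial demo module that just remembers the last payload it saw. */
static char seen[64];
static void demo_read(int conn, const char *data, size_t n) {
    (void)conn;
    size_t k = n < sizeof seen - 1 ? n : sizeof seen - 1;
    memcpy(seen, data, k);
    seen[k] = '\0';
}
static void demo_event(int conn, int event) { (void)conn; (void)event; }
```

What the module does with the bytes - POP3, finger, LDS cluster traffic - is entirely its own business; the engine only does the network plumbing.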
A web server is fine for client access to the LDS (VRS).
Hope that explains stuff!
Chris
--
Chris Smith
Technical Architect - netFluid Technology Limited.
"Internet Technologies, Distributed Systems and Tuxedo Consultancy"
E: address@hidden W: http://www.nfluid.co.uk