
Re: [Gnumed-devel] multitaskhttpd experiment


From: Sebastian Hilbert
Subject: Re: [Gnumed-devel] multitaskhttpd experiment
Date: Thu, 15 Jul 2010 12:24:58 +0200
User-agent: KMail/1.13.3 (Linux/2.6.33-6-desktop; KDE/4.4.5; i686; ; )

On Thursday 15 July 2010, 12:03:12, lkcl wrote:
> Sebastian Hilbert wrote:
> > On Wednesday 14 July 2010, 22:28:55, lkcl wrote:
> > 
> > Hi,
> > 
> >> rrright, i even solved the problem of lynx not working: i made the
> >> back-end
> >> server support HTTP Keep-Alives which i should have done in the first
> >> place.
> >> 
> >> apply these:
> >> http://lkcl.net/gnumed/0005-add-keep-alive-version-of-list-server-directory-file.patch
> >> http://lkcl.net/gnumed/0006-add-keepalive-version-of-send-error.-demo-showing-th.patch
> > 
> > The second patch is missing on your server. I cannot download it.
> 
> get all of them here.
> http://lkcl.net/gnumed/

Done.
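
For anyone following along: as I understand it, keep-alive support in a
BaseHTTPServer-style back-end mostly comes down to speaking HTTP/1.1 and
always sending a Content-Length. A minimal sketch of the idea (my own, not
the actual patch):

  from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

  class KeepAliveHandler(BaseHTTPRequestHandler):
      # HTTP/1.1 keeps the connection open between requests by default
      protocol_version = "HTTP/1.1"

      def do_GET(self):
          body = "hello\n"
          self.send_response(200)
          self.send_header("Content-Type", "text/plain")
          # without Content-Length the client cannot tell where the
          # response ends, and keep-alive breaks (hence lynx failing)
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  HTTPServer(("127.0.0.1", 60001), KeepAliveHandler).serve_forever()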

> 
> Sebastian Hilbert wrote:
> >> then git pull on multitaskhttpd repo
> >> 
> >> then do:
> >> $ cd multitaskhttpd
> >> $ python proxyapp.py &
> > 
> > I must be doing something wrong here since git pull claims it is up to
> > date
> > and still does not have proxyapp.py
> > 
> > Can you send me the git clone line again so I am using the correct one ?
> 
> sorted.  forgot to set the git-daemon-export-ok file, and the cron job wasn't
> working.
> 

Done. 

> Sebastian Hilbert wrote:
> >> this will redirect traffic from http://127.0.0.1:8080 to
> >> http://127.0.0.1:60001
> >> 
> >> then do:
> >> cd gnumed/client
> >> ./gm-from-vcs.sh --ui=mtweb &
> > 
> > I will try this when I have all the file. So you want me to try mtweb
> > again,
> > not the pxweb you introduced a while back ?
> 
> correct.  forget pxweb.
>

Understood. I was using the wrong one, since your patches indicated you are
still working in the ProxiedWeb directory.
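
As an aside, my mental model of that 8080 -> 60001 redirect is a small
forwarding proxy in front of the back-end, something like this toy sketch
(the idea only; proxyapp.py does the real per-session work):

  import BaseHTTPServer, urllib2

  BACKEND = "http://127.0.0.1:60001"

  class Proxy(BaseHTTPServer.BaseHTTPRequestHandler):
      def do_GET(self):
          # forward the request to the back-end and relay the answer
          resp = urllib2.urlopen(BACKEND + self.path)
          body = resp.read()
          self.send_response(resp.getcode())
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  BaseHTTPServer.HTTPServer(("127.0.0.1", 8080), Proxy).serve_forever()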
 
When I use --ui=mtweb it bails out with

ImportError: No module named MultiTaskWeb

I did not see any of your patches create that directory, but I will have a
look at them again.
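
(Assuming MultiTaskWeb is meant to be a package directory with an
__init__.py somewhere on the path, a quick check from gnumed/client is:

  $ python -c "import MultiTaskWeb; print MultiTaskWeb.__file__"

which should print the directory the import actually resolves to.)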

> Sebastian Hilbert wrote:
> >> this will start up the SimpleJSONRPCServer.py on port 60001
> >> 
> >> then do python testjsonrpc.py
> >> 
> >> and you should be rewarded with two successful queries and a prompt
> >> *inside* SimpleJSONRPCServer asking for a username and password.  this
> >> is *correct* because there should be an exception thrown instead.
> > 
> > Good to know.
> > 
Not there yet until I figure out the mtweb stuff.
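
For when I get there: my understanding is that testjsonrpc.py essentially
POSTs JSON-RPC requests through the proxy, so a hand-rolled equivalent
would look something like this (the URL and method name are my guesses,
not taken from the script):

  import json, urllib2

  # a JSON-RPC 1.0 style request; "version" is just an assumed method name
  request = json.dumps({"id": 1, "method": "version", "params": []})
  response = urllib2.urlopen("http://127.0.0.1:8080/", request)
  print json.loads(response.read())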

> >> if you also browse to http://127.0.0.1:8080 you should see a listing of
> >> the
> >> current directory, and at the bottom a "Cookie: hexdigest" shown. check
> >> also the "send_head. pid: NNNN" number and make a note of it.  if you
> >> then
> >> *close* the web browser down entirely and re-visit the page, you should
> >> note that the same hexdigest is shown. also, double-check that the debug
> >> info from SimpleJSONRPCServer "send_head. pid: NNNN" is the *exact* same
> >> number.
> > 
> > So the cookie stays there and is read in again when the browser gets
> > opened again?
> 
> yep.
> 
Understood. Currently, with pxweb, the cookie says: first none, then

Cookies: session=25debdb75c454251a5984ea898473c5c; 
session=25debdb75c454251a5984ea898473c5c; 
session=25debdb75c454251a5984ea898473c5c

But I guess all this info is useless unless I get the mtweb stuff to run.
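
(For my own notes: a 32-character hexdigest like the one above is what e.g.
uuid4().hex produces, so I imagine the proxy assigns sessions roughly along
these lines; a sketch of the idea, not the multitaskhttpd code:

  import uuid
  from Cookie import SimpleCookie

  def get_or_create_session(cookie_header):
      # reuse the session id the browser sent back, otherwise mint one
      c = SimpleCookie(cookie_header or "")
      if "session" in c:
          return c["session"].value
      return uuid.uuid4().hex   # 32 hex chars, like the value above

and then keys the choice of back-end process on that value.)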

> Sebastian Hilbert wrote:
> >> then, open a 2nd web browser, you should now get a 2nd hexdigest and a
> >> 2nd
> >> pid: NNNN number.  refresh the first browser, the two should be
> >> separate, all happy, regardless of how many times you exit the web
> >> browser(s).
> > 
> > That you will have to explain to me. Two separate browsers have two
> > separate hexdigest values (which is intended), but closing and reopening
> > does not create a new one, even though I would *assume* that closing and
> > reopening would be equal to a third or fourth browser?
> 
> yes.  you've assumed :)  i set the flags on the cookie (Expires={some
> future date}) to be 5,000 seconds into the future.  the browser _will_
> return that cookie, _even_ if the browser is closed.  until it expires.
> 
Understood.
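
(Setting a persistent cookie along those lines would look roughly like
this; a sketch using the 5,000-second figure above, not the actual code:

  import time
  from Cookie import SimpleCookie

  session = "25debdb75c454251a5984ea898473c5c"   # example value from above
  c = SimpleCookie()
  c["session"] = session
  # an absolute Expires date makes the cookie persistent: the browser
  # returns it even after a full restart, until the date passes
  c["session"]["expires"] = time.strftime(
      "%a, %d-%b-%Y %H:%M:%S GMT", time.gmtime(time.time() + 5000))
  print c.output()   # -> Set-Cookie: session=...; expires=...
)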

> Sebastian Hilbert wrote:
> > Please correct me if I am wrong. Is it correct that the current code
> > handles the situation that a connection will not get lost even if the
> > user is inactive (but leaves the browser window open)?
> 
> there are too many negatives for me to translate this correctly and
> accurately, but i will do my best to make some statements that may help.
> the code i have implemented as it stands does not do timeouts or exits.  as
> long as the front-end proxy is running and as long as the back-end service
> is running, each back-end process will live FOREVER.  there is currently NO
> mechanism to tell the back-end process to "die".
>

That was helpful. 
 
> in other words, if the user clears their browser cache and the session
> cookie is lost, the back-end process will *still* be there, waiting forever
> for them to come back and use it.
>   

> Sebastian Hilbert wrote:
> > Is it also true that the current code will handle the situation that
> > when a user closes down the browser, he will be asked for credentials
> > when opening a new browser instance?
> 
>  NO.  because the cookie i set to "persistent", when the user returns, as
> long as they have not cleared the cookie cache they will be RECONNECTED to
> the still-running service instance in the back-end server.
> 
>  if you want that particular behaviour, you will have to implement it
> MANUALLY because it is impossible to tell the difference between a user
> *deliberately* closing the browser and "the internet just happening to go
> away for a few seconds or minutes" and "normal HTTP 1.0 non-keep-alive
> disconnected traffic".

Understood. A while back you told me about the back-end processes that could
accumulate over time. Because of that, I envisioned some front-end/back-end
communication that would kill the back-end process if the client instance goes
away for whatever reason. Someone on a flaky internet connection could simply
set a high timeout (e.g. 5 minutes), but I doubt anyone would keep working
with a system that needs 5 minutes to return a request.
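
Something like an idle watchdog inside the back-end process could provide
that; a sketch of what I have in mind (the names and the
one-process-per-session assumption are mine):

  import os, threading, time

  TIMEOUT = 300                 # e.g. the 5 minutes mentioned above
  last_request = [time.time()]  # handlers would update this on each request

  def watchdog():
      # exit the whole back-end process once it has been idle too long
      while True:
          time.sleep(10)
          if time.time() - last_request[0] > TIMEOUT:
              os._exit(0)

  t = threading.Thread(target=watchdog)
  t.setDaemon(True)
  t.start()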

Sebastian


