Re: [Gnumed-devel] multitaskhttpd experiment
From: lkcl
Subject: Re: [Gnumed-devel] multitaskhttpd experiment
Date: Thu, 15 Jul 2010 03:03:12 -0700 (PDT)
Sebastian Hilbert wrote:
>
> Am Mittwoch 14 Juli 2010, 22:28:55 schrieb lkcl:
>
> Hi,
>
>> rrright, i even solved the problem of lynx not working: i made the
>> back-end
>> server support HTTP Keep-Alives which i should have done in the first
>> place.
>>
>> apply these:
>> http://lkcl.net/gnumed/0005-add-keep-alive-version-of-list-server-directory-file.patch
>> http://lkcl.net/gnumed/0006-add-keepalive-version-of-send-error.-demo-showing-th.patch
>>
>
> The second patch is missing on your server. I cannot download it.
>
>
get all of them here.
http://lkcl.net/gnumed/
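for anyone following along: the gist of keep-alive support in a python stdlib HTTP server is small enough to sketch. this is an illustration of the technique, not the actual patch:

```python
# Sketch (not the actual patch): HTTP keep-alive in Python's stdlib
# server.  Setting protocol_version to "HTTP/1.1" makes
# BaseHTTPRequestHandler keep the connection open between requests,
# provided every response carries an accurate Content-Length header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class KeepAliveHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # enables persistent connections

    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        # Content-Length is mandatory for keep-alive: without it the
        # client cannot tell where the response body ends.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 60001), KeepAliveHandler).serve_forever()
```

the two key points are protocol_version = "HTTP/1.1" and an accurate Content-Length on every response: miss either and clients cannot safely reuse the connection.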
Sebastian Hilbert wrote:
>
>
>> then git pull on multitaskhttpd repo
>>
>> then do:
>> $ cd multitaskhttpd
>> $ python proxyapp.py &
>
> I must be doing something wrong here since git pull claims it is up to
> date
> and still does not have proxyapp.py
>
> Can you send me the git clone line again so I am using the correct one ?
>
sorted. forgot to set the git-daemon-export-ok file, and the cron job wasn't
working.
Sebastian Hilbert wrote:
>
>
>>
>> this will redirect traffic from http://127.0.0.1:8080 to
>> http://127.0.0.1:60001
>>
>> then do:
>> cd gnumed/client
>> ./gm-from-vcs.sh --ui=mtweb &
>
> I will try this when I have all the files. So you want me to try mtweb
> again, not the pxweb you introduced a while back?
>
correct. forget pxweb.
Sebastian Hilbert wrote:
>
>
>>
>> this will start up the SimpleJSONRPCServer.py on port 60001
>>
>> then do python testjsonrpc.py
>>
>> and you should be rewarded with two successful queries and a prompt
>> *inside* SimpleJSONRPCServer asking for a username and password. this is
>> *correct* because there should be an exception thrown instead.
>>
> Good to know.
>
>> if you also browse to http://127.0.0.1:8080 you should see a listing of
>> the
>> current directory, and at the bottom a "Cookie: hexdigest" shown. check
>> also the "send_head. pid: NNNN" number and make a note of it. if you
>> then
>> *close* the web browser down entirely and re-visit the page, you should
>> note that the same hexdigest is shown. also, double-check that the debug
>> info from SimpleJSONRPCServer "send_head. pid: NNNN" is the *exact* same
>> number.
>>
> So the cookie stays there and is read in again when the browser gets
> opened again?
>
yep.
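the reason the pid stays the same is that the proxy keys its routing table on the cookie's hexdigest. here's a sketch of the idea (all names illustrative, not the actual proxyapp.py code): as long as the browser returns the same cookie, the lookup yields the same back-end process.

```python
# Sketch (hypothetical names): why a restarted browser is routed back
# to the same back-end process.  The proxy maps the session cookie's
# hexdigest to a back-end process; the same cookie -> the same process.
import hashlib, os

# routing table: cookie hexdigest -> back-end process id
sessions = {}

def route(cookie_hexdigest):
    """Return the pid serving this session, allocating one if new."""
    if cookie_hexdigest not in sessions:
        # the real proxy would fork/track a back-end process here;
        # we just fabricate a distinct pid for illustration
        sessions[cookie_hexdigest] = os.getpid() + len(sessions)
    return sessions[cookie_hexdigest]

first = hashlib.sha1(b"browser-1 session").hexdigest()
second = hashlib.sha1(b"browser-2 session").hexdigest()

pid_a = route(first)        # first visit: new back-end process
pid_b = route(second)       # second browser: different process
pid_a_again = route(first)  # browser closed and reopened: same cookie

assert pid_a == pid_a_again   # reconnected to the same back-end
assert pid_a != pid_b         # distinct browsers stay separate
```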
Sebastian Hilbert wrote:
>
>
>> then, open a 2nd web browser, you should now get a 2nd hexdigest and a
>> 2nd
>> pid: NNNN number. refresh the first browser, the two should be separate,
>> all happy, regardless of how many times you exit the web browser(s).
>>
> That you have to explain to me. Two separate browsers have two separate
> hexdigest values (which is intended), but closing and reopening does not
> create one, despite that I would *assume* that closing and reopening would
> be equal to a third or fourth browser?
>
yes, you've assumed :) i set the Expires flag on the cookie ({some future
date}) to a date 5,000 seconds into the future. the browser _will_ return
that cookie, _even_ if the browser is closed, until it expires.
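for reference, setting a persistent cookie like that looks roughly like this with the python stdlib (a sketch, not the actual code; the hexdigest value is made up):

```python
# Sketch: a persistent session cookie.  An Expires attribute ~5,000
# seconds in the future makes the browser write the cookie to disk
# and return it even after a full restart, until that date passes.
import time
from http.cookies import SimpleCookie
from email.utils import formatdate

cookie = SimpleCookie()
cookie["session"] = "deadbeefcafe"   # the hexdigest in the real code
# RFC 1123 date 5000 seconds from now; usegmt=True gives the required "GMT"
cookie["session"]["expires"] = formatdate(time.time() + 5000, usegmt=True)

header = cookie.output(header="Set-Cookie:")
print(header)
# prints something like:
#   Set-Cookie: session=deadbeefcafe; expires=Thu, 15 Jul 2010 11:26:32 GMT
```

omit the Expires attribute and you get a "session cookie" instead, which browsers discard on exit -- which is exactly the close-and-reopen behaviour Sebastian expected.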
Sebastian Hilbert wrote:
>
>
> Please correct me if I am wrong. Is it correct that the current code
> handles
> the situation that a connection will not get lost even if the user is
> inactive
> (but lets the browser window stay open )?
>
there are too many negatives for me to translate this correctly and
accurately, but i will do my best to make some statements that may help.
the code i have implemented as it stands does not do timeouts or exits. as
long as the front-end proxy is running and as long as the back-end service
is running, each back-end process will live FOREVER. there is currently NO
mechanism to tell the back-end process to "die".
in other words, if the user clears their browser cache and the session
cookie is lost, the back-end process will *still* be there, waiting forever
for them to come back and use it.
Sebastian Hilbert wrote:
>
>
> Is it also true that the current
> code will handle the situation that when a user closes down the browser he
> will be asked for credentials when opening a new browser instance ?
>
NO. because i set the cookie to be "persistent", when the user returns, as
long as they have not cleared the cookie cache they will be RECONNECTED to
the still-running service instance in the back-end server.
if you want that particular behaviour, you will have to implement it
MANUALLY because it is impossible to tell the difference between a user
*deliberately* closing the browser and "the internet just happening to go
away for a few seconds or minutes" and "normal HTTP 1.0 non-keep-alive
disconnected traffic".
--
View this message in context:
http://old.nabble.com/multitaskhttpd-experiment-tp29154568p29171163.html
Sent from the GnuMed - Dev mailing list archive at Nabble.com.