From: Javier de la Dehesa
Subject: Re: [Fastcgipp-users] how does fastcgi++ support multiple concurrent requests ?
Date: Wed, 05 Dec 2012 13:53:34 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0

On 04/12/12 21:59, Chang-Jian Sun wrote:
> Javier/Eddie, thanks a lot for your quick reply!
> 
> Right, spawning a new thread for each request won't scale; no doubt a
> thread pool will be the better solution.
> I'm actually looking for a way to push events back to the browser.
> FastCGI++ shows a nice request-response model, but I have some
> questions on how to support push in FastCGI++:
> 
> 1. Can I cache the request FCGX_Request* to keep the connection open
> and send events back to the client as needed? Is this the right
> approach to support push? Is this the so-called "long polling"?
> 

Actually, the example I posted before kind of does that. Take a look at the
delayed response tutorial (
http://www.nongnu.org/fastcgipp/doc/2.1/a00001.html ). You can keep the
request open as long as you need by returning false from the response()
method until you actually want to send something. And yes, that is "long
polling"; although it is not what you could strictly call "real push", it
is a fairly common approach (just be careful with the firewall and server
configuration: don't let them close inactive TCP connections).
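
To make that a bit more concrete, here is a rough, untested sketch of the
pattern. The subscribeToEvents() stub is something I just made up to stand
for whatever produces your events; the fastcgi++ parts are response(), out,
the protected callback functor and Fastcgipp::Message, which is how the
timer tutorial wakes a sleeping request up (double-check the names against
the 2.1 headers):

#include <fastcgi++/request.hpp>
#include <boost/bind.hpp>
#include <boost/function.hpp>

// Hypothetical stand-in for your event source: real code would store
// wakeUp somewhere and call it from the thread that produces the events.
void subscribeToEvents(const boost::function<void()>& wakeUp)
{
    (void)wakeUp;   // just a stub here
}

class PollRequest: public Fastcgipp::Request<char>
{
public:
    PollRequest(): m_state(WAIT) {}
private:
    enum State { WAIT, SEND } m_state;

    bool response()
    {
        switch(m_state)
        {
        case WAIT:
        {
            // Nothing to send yet: hand a "wake me up" functor to the
            // event source and return false so the request (and the
            // connection) stays open. When the functor is called, the
            // manager queues the message and calls response() again.
            Fastcgipp::Message msg;
            msg.type = 1;   // non-zero type means a user message
            msg.size = 0;
            subscribeToEvents(boost::bind(callback, msg));
            m_state = SEND;
            return false;
        }
        case SEND:
            // We were woken up: send the event and finish.
            out << "Content-Type: text/plain\r\n\r\n";
            out << "the pushed event data would go here\n";
            return true;    // true completes the request and closes it
        }
        return true;
    }
};

You would run it with a Fastcgipp::Manager<PollRequest>, exactly like the
echo example does with its own request class.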

> 2. If so, when/how does the server close this request? Can it send
> multiple responses?

The request is closed when true is returned by the response() method, and
the data written to out is sent (actually, I can't tell whether no data is
sent until true is returned, or whether the data is sent as it is written
to out).
I am not sure what you mean by "multiple responses". You can only send one
response per request; those are the HTTP rules. But you can make a response
as long as you want, and you can build it up over several response() calls.
If you are thinking of long polling, the usual thing is to send a request,
wait, and when you get a response, process it (or whatever) and send a new
request to wait on.
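
If it helps, the same trick also shows what I mean by forming one response
in several response() calls: write a piece, arrange to be woken up again,
return false, and only return true on the call that finishes the response.
A short untested sketch, reusing the made-up subscribeToEvents() stub and
the includes from the previous one:

class ChunkedReply: public Fastcgipp::Request<char>
{
public:
    ChunkedReply(): m_callNumber(0) {}
private:
    unsigned m_callNumber;

    bool response()
    {
        if(m_callNumber == 0)
            out << "Content-Type: text/plain\r\n\r\n";  // headers only once

        // Each call appends another piece of the one and only response.
        out << "piece number " << m_callNumber << "\n";
        ++m_callNumber;

        if(m_callNumber < 3)
        {
            // Not done yet: ask to be woken up again and keep the request
            // open; whether this piece reaches the client now or only at
            // the end is the part I am not sure about.
            Fastcgipp::Message msg;
            msg.type = 1;
            msg.size = 0;
            subscribeToEvents(boost::bind(callback, msg));
            return false;
        }
        return true;    // the response is complete; the request is closed
    }
};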

> 3. If this is not the right approach, what would be the right approach
> to support push in FCGI? Or shouldn't I even use FCGI?
> 

I will have to deal with push features in my own system and this is
probably the approach I would take. For a "real push" solution you would
probably have to work at a lower level, or with some other protocol or
library that is not HTTP based. You could take a look at the HTTP push
techniques also known as Comet (
http://en.wikipedia.org/wiki/Comet_(programming) ), but there is no
silver bullet.

> 4. If the client is disconnected (say the user closes the browser), how
> does the FCGI process get notified that the request is no longer valid?
> Is there any callback for that?
> 

Well, I don't know a lot about the library internals, but I don't really
see any callback for that. The transceiver attribute is the low-level
connection manager, but it is private and doesn't seem to provide this
precise information. The state attribute, on the other hand, is the record
type as defined in the FastCGI protocol (
http://www.fastcgi.com/devkit/doc/fcgi-spec.html ); I have not been able
to determine whether it could be useful for this purpose, and in any case
it is private too.
Surely Eddie's mastermind could provide more insight here.

> 5. How does FastCGI++ support XMLHttpRequest and the JSON format?
> 

Ajax requests don't work any differently from other requests. The library
processes the multipart/form-data and application/x-www-form-urlencoded
MIME types by itself. If you want to use some other protocol (some other
MIME type), you should override the inProcessor() method. I am not sure
whether it appears in the docs; take a look at
include/fastcgi++/request.hpp in the source code.
The code I posted before uses it too, so you can take it as an example.
Speaking of which, you will need the git version of the library for that
code to work, as the tar.gz available for download declares the postBuffer
attribute of Http::Environment as private, so you won't be able to
retrieve the POST data.
If you mean sending XML/JSON in the response (not in the request), you can
just write the data to out and it will be sent to the client. As for the
Content-Type header of the response, I was not able to find a dedicated
API for it; as far as I can tell you just write the header line into out
yourself before the body, the same way the echo example writes its own
headers. This should not be a big problem if you are not too picky on the
client side, but some web browsers can be picky as hell about MIME types.
I'm afraid I don't have a better answer here.
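
For the response side, a minimal untested sketch of what I mean by "just
write the data to out": the JSON is built by hand only to keep the example
self-contained, and the Content-Type line is written into out the same way
the echo example writes its own headers.

#include <fastcgi++/request.hpp>
#include <fastcgi++/manager.hpp>

class JsonReply: public Fastcgipp::Request<char>
{
    bool response()
    {
        // CGI-style headers first, then a blank line, then the body.
        out << "Content-Type: application/json; charset=utf-8\r\n\r\n";
        out << "{ \"status\": \"ok\", \"value\": " << 42 << " }\n";
        return true;
    }
};

int main()
{
    // Same driver as in the echo example.
    Fastcgipp::Manager<JsonReply> manager;
    manager.handler();
    return 0;
}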

> I very much appreciate your education!
> 
> Regards, -CJ
> 
> 
> On Tue, Dec 4, 2012 at 8:40 AM, Javier de la Dehesa <address@hidden> wrote:
>> On 03/12/12 18:49, Eddie Carle wrote:
>>> On Sun, 2012-12-02 at 01:48 -0500, Chang-Jian Sun wrote:
>>>> I tested several examples in fastcgi++ (very nice API !), but didn't
>>>> find any examples to show how it can process multiple concurrent
>>>> requests.
>>>>
>>>> It seems that the fastcgi process handles one request at a time. In
>>>> the echo.cpp example, if I add sleep(10) in Echo::response() and
>>>> start multiple connections in the browser, the requests are processed
>>>> sequentially.
>>>>
>>>> I thought the fastcgi++ main() should support something like this:
>>>>
>>>> while (true)
>>>> {
>>>>     accept new request
>>>>     spawn new thread (or assign to worker thread) to process this
>>>> request
>>>> }
>>>>
>>>> This will allow processing concurrent requests. Am I conceptually
>>>> wrong ? Thanks a lot!
>>>
>>> Well, just because something is running in a separate thread doesn't
>>> necessarily mean that it is actually running concurrently. Not to
>>> mention that the cost of setting up and managing many threads is quite
>>> high. It is fairly well established that the thread per request model
>>> does not scale very well.
>>>
>>> The idea with fastcgi++ is that requests are given control over
>>> execution and then when they are idle, they return it. One would never
>>> call sleep(10) in a request as it would block. One would start a timer
>>> and immediately return from response(). Then when the timer is complete,
>>> response() would be called again to complete the request. If you peruse
>>> through the examples you'll notice a timer example that shows how it
>>> works.
>>>
>>> Do some Apache benchmarking with traditional libfcgi spawning a thread
>>> per request and compare it to fastcgi++ doing things "sequentially";
>>> the results may surprise you.
>>>
>>
>> I am using FastCGI++ too and needed the ability to process several
>> concurrent requests. As Eddie states, creating a new thread for each
>> request does not scale well. My approach is to use a simple kind of
>> thread pool, so I can process a number of simultaneous requests without
>> needing to create new threads. I am attaching some sample code; not
>> that it is really good or anything, but it may be useful as a guide
>> (I've just made it up from my real code, so there could be some flaws
>> here and there).
>>
>> There is a generic BlockingQueue class used to enqueue the requests, and
>> a ThreadedRequest class, which implements the deferred response by
>> enqueuing the request in that queue. RequestDispatcher implements a
>> singleton thread pool that processes the requests and sends the
>> responses.
>>
>> The constant NUM_WORKERS defines the number of threads in the pool, and
>> at line 208 you should put the code that processes your request. If you
>> have any questions about it, please ask.
>>
>> --
>> Javier de la Dehesa


-- 
Javier de la Dehesa


