
From: Chang-Jian Sun
Subject: Re: [Fastcgipp-users] how does fastcgi++ support multiple concurrent requests ?
Date: Tue, 4 Dec 2012 15:59:34 -0500

Javier/Eddie, Thanks a lot for your quick reply !

Right: spawning a new thread for each request won't scale; no doubt a
thread pool is the better solution.
I'm actually looking for a way to push events back to the
browser. FastCGI++ has shown a nice request-response model, but I have
some questions about how to support push in FastCGI++:

1. Can I cache the FCGX_Request* to keep the connection open
and send events back to the client as needed? Is this the right approach
to support push? Is this the so-called "long polling"?

2. If so, when/how does the server close this request? Can it send
multiple responses?

3. If this is not the right approach, what would be the right approach to
support push in FastCGI? Or should I not use FastCGI at all?

4. If the client disconnects (say the user closes the browser), how does
the FastCGI process get notified that the request is no longer valid? Is
there any callback for that?

5. How does FastCGI++ support XMLHttpRequest and the JSON format?

I very much appreciate your help!

Regards, -CJ

On Tue, Dec 4, 2012 at 8:40 AM, Javier de la Dehesa <address@hidden> wrote:
> On 03/12/12 18:49, Eddie Carle wrote:
>> On Sun, 2012-12-02 at 01:48 -0500, Chang-Jian Sun wrote:
>>> I tested several examples in fastcgi++ (very nice API!), but didn't
>>> find any examples showing how it can process multiple concurrent
>>> requests.
>>> It seems that the fastcgi process handles one request at a time. In the
>>> echo.cpp example, if I add sleep(10) in Echo::response() and start
>>> multiple connections in the browser, the requests are processed
>>> sequentially.
>>> I thought the fastcgi++ main() should support something like this:
>>> while (true)
>>> {
>>>     accept new request
>>>     spawn new thread (or assign to worker thread) to process this
>>> request
>>> }
>>> This would allow processing concurrent requests. Am I conceptually
>>> wrong? Thanks a lot!
>> Well, just because something is running in a separate thread doesn't
>> necessarily mean that it is actually running concurrently. Not to
>> mention that the cost of setting up and managing many threads is quite
>> high. It is fairly well established that the thread-per-request model
>> does not scale very well.
>> The idea with fastcgi++ is that requests are given control over
>> execution and then when they are idle, they return it. One would never
>> call sleep(10) in a request as it would block. One would start a timer
>> and immediately return from response(). Then when the timer is complete,
>> response() would be called again to complete the request. If you look
>> through the examples you'll notice a timer example that shows how this
>> works.
>> Do some Apache benchmarking with traditional libfcgi calling a thread
>> per request, compare it to fastcgi++ doing things "sequentially", and
>> the results may surprise you.
> I am using FastCGI++ too and needed the ability to process several
> concurrent requests. As Eddie states, creating a new thread for each
> request does not scale well. My approach is to use a simple
> thread pool, so I can process a number of simultaneous requests without
> needing to create new threads. I am attaching some sample code; it is
> not necessarily good, but it may be useful as a guide (I adapted it
> from my real code, so there could be some flaws here and there).
> There is a generic BlockingQueue class used to enqueue the requests;
> the ThreadedRequest class implements a deferred response by enqueuing
> the request in the queue. RequestDispatcher implements a singleton
> thread pool that processes requests and sends responses.
> The constant NUM_WORKERS defines the number of threads in the pool, and
> at line 208 you should put the code to process your request. If you
> have any questions about it please ask.
> --
> Javier de la Dehesa
