gnustep-dev

Re: about RunLoop, joystick support and so on


From: Richard Frith-Macdonald
Subject: Re: about RunLoop, joystick support and so on
Date: Tue, 13 Feb 2007 12:44:31 +0000


On 13 Feb 2007, at 11:03, Xavier Glattard wrote:

Richard Frith-Macdonald <richard <at> tiptree.demon.co.uk> writes:

On 11 Feb 2007, at 19:18, Xavier Glattard wrote:
(...)
No, in fact both backends use the runloop and both use
GSRunLoopWatcher which works perfectly.  The fact that both backends
at certain points chose to poll their respective message queues
without asking the runloop to tell them whether there is anything
available does not imply anything about

You're right: it works!
Well... most of the time.
GNUstep is not perfect. For instance it can't properly manage intensive use of
performers or events, on either w32 or x11 (see my openGL test tool).

I would need more details to know what you mean here. It may be that what you are seeing as a problem is actually a misunderstanding about what the code is supposed to do.

Actually I'm *hoping* that Apple will release something to specify
clearly how an NSStream subclass (other than those subclasses Apple
provide of course) can be tied in to an NSRunLoop.  Such an API would
be able to take the place of GSRunLoopWatcher.  Without that API you
can't write new subclasses of NSStream and get their event handling
code called when an event becomes available, you can only base code
on existing NSStream implementations.

All you have to do to subclass NSStream is documented at Apple: look at the
NSStream, NSOutputStream and NSInputStream reference pages.

I was talking about how to get it to interact with an NSRunLoop effectively ... ie how to get a runloop to trigger an event on the stream when the low level operating system event occurs. eg. how do you set things up to trigger a stream event when a semaphore flips or a shared memory segment is updated.

eg. if your joystick driver provides an interface like a file (normal
on unix and quite common on windows) then you can probably just open
an input stream using the relevant file device, but if the joystick
has to be accessed via signals or shared memory then you have to use the
GSRunLoopWatcher api directly.

I would have to subclass NSInputStream to tell it to get the data from the
memory/signal/system call instead of a file. That's all.

But you need a mechanism to tell the runloop that the event has occurred (ie that there is some data for the stream to get) unless you are talking about repeated active polling ... which is horribly inefficient of course.

What does a GSRLWatcher do that an NSStream doesn't?

On windows ... windows messages.

The only difference I see is that GSRLWatcher handles _blocks_ of bytes while
NSStream handles _streams_ of bytes.

No, a watcher does not handle bytes at all. It detects whether an event has occurred (eg a file descriptor becoming readable) and informs its delegate so that the delegate can handle any data transfer.


(...)
Something like that :

                 |-- GSWin32EventStream <GSBackendEventStream>
NSInputStream <--|
                 |-- GSX11EventStream   <GSBackendEventStream>

[GSDisplayServer -eventStream] would be called by NSApp to get an
NSInputStream<GSBackendEventStream>. Then NSApp would register itself as the
delegate of this NSStream and schedule it in the runLoop.

A GSX11EventStream would get the event with XPending/XNextEvent when the
runLoop polls, then translate it into an NSEvent and send it back to the loop.
A GSWin32EventStream would get the event from another NSStream (which would
only select the window message from PeekMessage), then translate and send it
back to the loop.

Not so simple? Yeah, probably.

That sounds quite nice,

Thanks :o)

but basically it's reorganising the incoming
half of the GSDisplayServer so that instead of adding events to the
application's event queue directly, it provides a stream of NSEvent
objects for the application to put in the event queue itself.  In
both cases the internal workings would need to be pretty much the
same ...
1. ask the runloop to inform the GSDisplayServer instance when there
is data available
No : the NSApp needs the data. The server knows if data is available.

2. when data is available, pull it off the X event queue or windows
message queue
No : the server pulls off the event and gives it when asked

What you seem to be describing here is either a blocking or an actively polling model ...
the app asks the server for an event,
the server checks the windows or X queue and if there is an event, parses and returns it.
If there is NOT an event, then either
1. (polling) a nil event is returned and the app asks again or
2. (blocking) the server waits until an event arrives, then returns that

The problem with 1 is that your app uses the entire cpu of your system repeatedly polling (unless you set a timer and only poll occasionally, in which case events can be late because you have to wait for the timer to expire before you poll for them). The problem with 2 is that your app hangs waiting for events to arrive when it should be doing other things ... and the GUI API is designed for non-blocking/event-driven operation, so this would be bad.

3. make sense of the data, generating the corresponding NSEvent
object or objects (if any)
Yes.

4. add the event to the event queue (current implementation) or store
the event in memory owned by a stream object and trigger an event
handler in the NSApplication code so that the NSApplication can read
the event from the stream and put it in the event queue.
Yes and no : NSApp needs no event queue.

If you look at the API and documentation you will find that it *has* an event queue. Sure, the whole system could be designed and implemented differently, but that's what the design of the OpenStep/MacOS-X/GNUstep AppKit/GUI library happens to be.

If different sources of NSEvents were going to be used by lots of
different pieces of code, the encapsulation of all the work inside a
stream interface would be really useful/clean/simple.  However, in
practice we have a single source of events for a display server, and
all those events go into the application's event queue and are pulled
out of that queue by the code which needs the events,

But the application has to handle many other events than those got from the GUI:
timers and performers, and socket/network messages, and so on.

I thought we were talking about gui events (NSEvent objects).
Certainly we handle other events via callbacks from the runloop too ... but they are not connected to the issue of handling the NSEvents in a stream subclass.

and this model
of operation is inherent in the AppKit/GUI API.   So I don't think
wrapping events in an NSStream subclass before putting them in the
event queue is any help.

NSStream is only a better GSWatcher. That's not the main problem.

No, NSStream deals with bytestream oriented I/O, and as part of that can be scheduled in a runloop to be told when I/O is possible. GSRunLoopWatcher provides a mechanism to let objects be notified when I/O is possible. Because the mechanism by which the runloop knows what events should trigger the stream event handler is undefined in the MacOS-X NSStream and NSRunLoop documentation, we use the GSRunLoopWatcher to provide that linkage.


The integration of NSStream in the NSRunLoop code might be a good opportunity to make some other changes and simplifications in the event management code.

Here is my 2cts suggestion.

At least I think the DispatchMessage/windowProcedure should be hidden in a more abstract mechanism. Without the window procedure the win32 backend would be more like the x11 one, and with enough abstraction (that NSStream might give) some
more code could be shared.

Maybe ... but if we want windows programmers to work on the windows backend then it might be better to keep things as 'normal' for them as possible rather than trying to hide the windows stuff.

The use of NSRunLoop could also be extended. For instance [GSDisplayServer -getEventMatching...] contains its own loop: NSEvents are first got from a queue owned by the server, then the runLoop is asked for timers and performers. The event queue of the server is filled earlier by a message from the runLoop to
receivedEvent. I think the runLoop should manage all this and the server should
only have to be an interface with the GUI system (isPendingEvent, getEvent, etc.). As the events got from the system have to be translated and filtered,
the server has to manage a queue.

The runloop can't manage X event messages (because it knows nothing about X), so I guess you mean that the NSStream subclass would manage everything. That really just means that you are taking all the code for handling incoming events from the GSDisplayServer subclass and putting it in a different class (a subclass of NSStream). Of course, in order to convert from X events to NSEvents that stream subclass will also need to know something about the state of requests made by the GUI to the backend, and will need other internals of GSDisplayServer (eg to convert from X window coordinates to NSWindow coordinates), so the code still in the GSDisplayServer subclass will have to be tightly coupled with the code in the NSStream subclass. And the NSStream subclass will not be usable except in conjunction with the GSDisplayServer subclass ... so all that has been achieved is to split the GSDisplayServer into two slightly more complex parts.

Here is roughly what's in my head :

 NSApplication  NSRunLoop  GSEventStream  GSEventServer  NSWindow
    ---           ---          ---            ---         ---
     |
     |eventStream
     |---------------------------------------->|
     |
     |setDelegate: self
     |------------------------->|
     |
     |scheduleInRunLoop
     |------------------------->|
     |
     |acceptInput
     |------------>|
                   |hasBytesAvailable
                   |----------->|DPSPendingEvent (?)
                   |            |------------->|
                   |            |              |isQueueEmpty
                   |            |              |----\
                   |            |              |<---/
                   |            |              |
                   |            |              |fillEventQueue
                   |            |              |----\
                   |            |                   |
                   |            |              |<---/
                   |            |              |
                    |            |              |isPendingEvent (*)
                    |            |              |----\  PeekMessage
                    |            |              |<---/   or XPendingEvent
                    |            |              |
                    |            |              |getAndDecodeSystemEvent (*)
                    |            |              |----\  GetMessage
                    |            |              |<---/   or XNextEvent
                   |            |              |
                   |            |              |enqueueEvent
                   |            |              |----\
                   |            |              |<---/
 stream:handleEvent|
     |<------------|
     |           (...)
     |read
     |------------------------->|DPSGetEvent
     |                          |------------->|
     |                          |              |dequeueEvent
     |                          |              |----\
     |                          |              |<---/
     |sendEvent
     |---------------------------------------------------->|


(*) Only two methods are specific to the underlying system; getAndDecode is the
main part.

The runLoop wouldn't know anything of the backend: it'd only have to manage
streams.
The server wouldn't know anything of the runLoop nor the NSApp: it'd only have
to manage the events from the system.

Ideally the real server shouldn't know anything of the AppKit but what is included in GSDisplayServer.h. But that would only be true in a Perfect World(tm).
Some bad examples in the X11 backend:
  [NSApp deactivate]
  [NSApp mainMenu]
In the w32 backend :
  [NSApp terminate: nil]
  use some Panels, includes NSText, NSMenu, NSTextField, ...
  get many notifications from NSApp

IMO it is legitimate for the backend code to make use of the public APIs of the frontend.

I know: the backend does this because it needs to do this. But it should not need to do this. The 2 backends are so different that their behaviors are really
different sometimes. Most of this should be done by NSApp.

Certainly we should try to get behaviors to be as similar as is practically possible.

....

I know... I just come here and start criticizing :-p
But I only want to help, because I like GNUstep :-)
My ideas are probably not perfect, but I'd be happy to talk about all that again
and again. And to help do the job.

I'm not convinced that wrapping backend specific event handling inside an NSStream would be an improvement. After all, the NSStream API is intended for fairly low-level byte-stream oriented I/O rather than to implement an event queue. Surely it would be better to clean up the existing code in the two backends, and continue to use the existing event queue. Even if we did want to go to a stream based queue I expect it would be better to clean up existing code first, and use the cleaned up code as the basis for any change.

Identifying places where the behaviors of the backends differ, and trying to make them consistent, would be really good.





