Re: [Gnu3dkit-discuss] G3DRenderer - Object

From: Philippe C . D . Robert
Date: Mon, 17 Mar 2003 00:07:17 +0100

On Sunday, March 16, 2003, at 10:15 AM, Brent Gulanowski wrote:
Do you mean an aggregation of arbitrary geometric primitives by 'Object'? The idea here is to use a compile action on the respective subtree which would generate an optimised (internal!) representation for the renderer of choice. This is hence not an application-specific optimisation, but application-specific optimisations on the scene graph level will still be required to maximise rendering performance.

I meant the Object in the Renderman sense (beginObject: etc.). Whether that is created using a compile action, I cannot tell from the current interface.
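For context, the Object meant here is RenderMan's retained geometry: an ObjectBegin/ObjectEnd pair records a block of geometry once, and ObjectInstance replays it later. Mapped onto a renderer protocol it might look roughly like this -- the selector names are illustrative assumptions, not the current G3DRenderer interface:

```objc
// Sketch only -- hypothetical selectors modelled on RenderMan's
// ObjectBegin/ObjectEnd/ObjectInstance; not the actual 3DKit API.
G3DObjectHandle obj = [renderer objectBegin];
[geometry drawGeometry:renderer];    // recorded by the renderer, not drawn
[renderer objectEnd];

// later, possibly many times per frame:
[renderer objectInstance:obj];       // replay the retained geometry
```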

The problem here is what we want to expose, and what we have to expose, to 3DKit developers. Maybe some parts of this interface should only be usable by the 3DKit itself?

On Sunday, March 16, 2003, at 02:33  PM, Philippe C.D. Robert wrote:
Talking MVC, a scene graph is a data type and hence it provides the model only, actions such as the G3DRenderAction are the controllers which operate on this data, and the camera/view duo is the view which is used to visualise the data.

In MVC as I know it, a model is more than just data, it is also behaviour. A model object stores its own data and acts upon it. Controllers communicate with the model objects -- they do not operate on the data. If they do, it violates data hiding.

Data access happens via method invocation, so there is no data hiding violation. Here (as I see it) the graph is really only a data structure, otherwise you will run into severe problems afterwards.
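The point that the graph is "only" a data structure, yet still made of objects accessed via methods, could be sketched like this (class and method names are hypothetical, for illustration only):

```objc
// Hypothetical node class -- illustrates a graph node that exposes
// its data via methods but carries no rendering behaviour itself.
@interface G3DShapeNode : G3DNode
{
  G3DGeometry *geometry;   // assumed type, for illustration
}
- (G3DGeometry *)geometry;
- (void)setGeometry:(G3DGeometry *)aGeometry;
// note: no draw method here -- drawing is driven by the render action
@end
```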

In other words, the scene graph only contains the data to be used by the renderers to visualise the respective scene; this includes the geometry but also transformations, shaders and similar data. Therefore you can, for example, render the same scene using different renderers to generate different results (in any respect) without touching the scene data at all. So the model is indeed completely separated from the view.

This sounds somewhat contradictory. On the one hand, the scene graph is little more than a container -- in which case, why have classes at all? On the other

Why wouldn't you use classes for data structures (containers)? This sounds weird to me.

hand, you claim that the renderer never touches the data. How is that possible? Something must be able to access it, while guaranteeing that the data is preserved and internally consistent. The renderer cannot take on that responsibility. In which case, the scene classes provide mutator and accessor methods, and nothing else? If a shape holds a set of polygon attributes, say ten thousand of them, is this really going to be copied into a dictionary, and then read back out?

No, this is not what I meant. A render action traverses the graph and manages the internal gfx states. Whenever it reaches, let's say, a shape with a geometry node attached, it sets up all the states required for drawing as defined by the shape and then tells the geometry object to draw itself (by passing it the correct renderer to use). So the render action controls the rendering process, but the geometry objects draw themselves. Does this make more sense to you?

You could help me in my confusion by describing the steps necessary to access a scene and draw something in a frame buffer, by sketching the call tree if possible, because I really cannot visualize it.

I'll try...

// setup a scene graph
scene = ....;

// we use OGL here as an example
openGLRenderer = [[G3DOGLRenderer alloc] init];

// setup a render action w/ the OGL renderer
ra = [[G3DRenderAction alloc] initWithRenderer:openGLRenderer];

// init and set the camera object
myCamera = [[G3DCamera alloc] initWithView:myView];
[ra setCamera:myCamera];

Now to render a scene the drawRect: method (for example) of myView tells the render action to traverse the scene graph and render it by calling

[ra apply:scene];

Rendering is done by sending the appropriate commands -- based on the graph nodes' state -- to its renderer (state handling, transformations, ...) using the renderer API, e.g.

[openGLRenderer worldBegin];
[openGLRenderer transformBegin];
[geometry drawGeometry: openGLRenderer];
[openGLRenderer transformEnd];
[openGLRenderer worldEnd];
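To close the loop with the drawRect: side mentioned above, the view code might look like this. This is a sketch under the assumption that the view keeps references to the render action and scene set up earlier; the flush call is likewise an assumption about where the buffer swap happens:

```objc
// Sketch: how myView could drive the traversal shown above.
- (void)drawRect:(NSRect)rect
{
  [ra apply:scene];                  // traverse graph, emit renderer commands
  [[NSOpenGLContext currentContext] flushBuffer];  // assumed buffer swap
}
```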

I want to draw a sharp distinction between what I consider the real scene data -- the things in the scene and their relation to one another, including scene level attributes -- and the -representations- of the things in the scene. You have me questioning whether such a distinction is possible, but I am confident that it *is* possible, and that the distinction is very important and could make a big difference in the design and performance of the software.

If I can find that difference, I want to pull out all the representation data and let the renderer itself manage it, with help from data/file controllers. This will make the scene smaller and lighter, and leave the RenderKit to manage it more intelligently. It will make it easier to use different renderers to present the same scene, and even allow other kinds of views to pose as a renderer to present the scene data in a completely different fashion. But this is dependent on things that I cannot know by looking at the interface.

Can you please give me a concrete example showing what you want to achieve and what is not possible with the current design approach?

Philippe C.D. Robert
