gnu3dkit-discuss

Re: [Gnu3dkit-discuss] Re: NSOpenGL RenderKit


From: Philippe C. D. Robert
Subject: Re: [Gnu3dkit-discuss] Re: NSOpenGL RenderKit
Date: Thu, 24 Oct 2002 12:25:28 +0200

On Wednesday, October 23, 2002, at 05:10, Brent Gulanowski wrote:
On Wednesday, October 23, 2002, at 06:44 AM, Philippe C.D. Robert wrote:

This is a very interesting issue. The GNU 3DKit was developed as an OpenGL-based 3D framework only, although the idea to write it came from NeXT's 3DKit. Now my research interests have shifted a little and I am less interested in purely OpenGL-based work. Up until recently I did not think of making the GNU 3DKit more generic, though. But then I started thinking about how the next-generation 3DKit could be used to render a scene using different techniques (I am particularly interested in ray tracing, but other scanline-based techniques would be interesting as well). My idea was to manage a scene in a scene database which could then be rendered by different renderers. Of course this needs much more thinking, but I now believe that it is worth it. This has the drawback that a running version of it will be delayed even more, though.

Is this really worth the effort? Is anyone else interested in using such functionality?


I've wondered about this, too. I think the difficulty is that the more optimized the OpenGL renderer is, the more closely it is tied to the scene graph representation. (Admittedly, this is an area I'm weak on -- I understand the idea of re-sorting the polygons based on state, but not how this works nicely with sorting them based on visibility.) Given a spatially organized tree, multiple scanline renderers could be supported by providing a different method for each and renaming the OpenGL render method to something like "renderToOGL". For a ray tracer, would the nodes still be involved directly in producing output? I can imagine a method that accepted a ray and returned its reflected and refracted components.
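The "method that accepts a ray" idea above can be sketched with the standard reflection formula r = d - 2(d.n)n, which is what such a node method would compute for the reflected component. This is just an illustrative sketch; the types and function names here are hypothetical and not part of any 3DKit API.

```cpp
#include <array>

// Hypothetical sketch of a per-node ray interaction: given an incoming
// ray direction d and a unit surface normal n, return the reflected
// direction using r = d - 2(d.n)n. The refracted component would be
// computed analogously via Snell's law.
using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// n must be unit length for the formula to hold.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    double k = 2.0 * dot(d, n);
    return { d[0] - k * n[0], d[1] - k * n[1], d[2] - k * n[2] };
}
```

A ray pointing straight down at a horizontal surface, for example, reflects straight back up.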

In the design approach I have in mind, the scene graph part of the 3DKit is decoupled from any concrete renderer: you pass a renderer when performing an action on the scene (draw, cull, ...), and the renderer knows by itself how to do the job. It is thus the 3DKit's responsibility to provide developers with an API for writing such renderers without their having to know about the 3DKit's internals.
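The decoupling described above is essentially the visitor pattern: nodes expose a generic traversal that takes an abstract renderer, and each concrete renderer decides what the action means. The following is a minimal sketch of that idea only -- every class and method name here is hypothetical, not the actual 3DKit API (which is Objective-C in any case).

```cpp
#include <memory>
#include <string>
#include <vector>

// Abstract renderer interface. Scene nodes never know which concrete
// renderer is visiting them; they just hand it their data.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void drawMesh(const std::string& name) = 0;
};

// One possible concrete renderer; records its draw calls so the
// traversal order is observable.
class OpenGLRenderer : public Renderer {
public:
    std::vector<std::string> log;
    void drawMesh(const std::string& name) override {
        log.push_back("GL:" + name);
    }
};

// A ray tracer would implement the same interface differently.
class RayTracer : public Renderer {
public:
    std::vector<std::string> log;
    void drawMesh(const std::string& name) override {
        log.push_back("trace:" + name);
    }
};

// A scene node exposes one generic action that accepts any renderer,
// so the graph itself stays renderer-agnostic.
class SceneNode {
    std::string name;
    std::vector<std::unique_ptr<SceneNode>> children;
public:
    explicit SceneNode(std::string n) : name(std::move(n)) {}
    void addChild(std::unique_ptr<SceneNode> c) {
        children.push_back(std::move(c));
    }
    void render(Renderer& r) {
        r.drawMesh(name);
        for (auto& c : children) c->render(r);
    }
};
```

The same scene can then be handed to an OpenGLRenderer or a RayTracer without any change to the node classes.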

I am currently working on this; I'll let you know as soon as I have something more concrete to show!

Do ray tracers use polygon approximations or mathematical descriptions for curved surfaces? In the latter case, the scene objects would be fundamentally different from the objects aimed at OpenGL or another real-time renderer. You could have scene objects that produce their own approximations when a scanline renderer is used, and use their mathematical descriptions (if they exist) when a ray tracer (or other offline renderer) is used. Is this too problematic? Would anyone want an abstract scene representation that could adapt itself to the renderer? The predictable solution is a completely different set of node classes, because the overlap in functionality is too small. But it would be nice, for an artist/designer, to produce one scene specification that could be reused with different renderers and optimized for each of them.
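(For what it's worth, classic ray tracers often do intersect analytic primitives such as spheres exactly, while scanline renderers need tessellated geometry.) The dual-representation idea above can be sketched as a single object offering both views of itself; again, all names here are hypothetical illustrations, not any real 3DKit class.

```cpp
#include <vector>

// Placeholder triangle type; vertex data omitted for brevity.
struct Triangle { int id; };

// Hypothetical scene object that carries an exact mathematical
// description and can also produce a polygon approximation on demand.
struct AnalyticSphere {
    double cx, cy, cz, radius;

    // Exact test used by a ray tracer (toy case: a ray from the
    // origin along +x hits the sphere iff the center lies in front
    // of the origin and within `radius` of the x axis).
    bool hitByXAxisRay() const {
        double d2 = cy * cy + cz * cz;
        return cx > 0.0 && d2 <= radius * radius;
    }

    // Approximation handed to a scanline/OpenGL-style renderer; a
    // real implementation would emit actual vertex positions.
    std::vector<Triangle> tessellate(int slices) const {
        std::vector<Triangle> tris;
        for (int i = 0; i < slices; ++i) tris.push_back({i});
        return tris;
    }
};
```

A renderer would then pick whichever view it understands, leaving the artist with one scene specification.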

A good scene graph should IMHO be able to manage data in any format; remember that a scene graph is 'only' a data structure. It is the renderer's responsibility to know how to perform specific actions on the graph's data. To help a renderer do its job well, the GNU 3DKit could still provide some methods to optimise a scene representation for a specific target, though.

-Phil
--
Philippe C.D. Robert
http://www.nice.ch/~phip




