
Re: [Gnu3dkit-discuss] considering G3DGeometry

From: Brent Gulanowski
Subject: Re: [Gnu3dkit-discuss] considering G3DGeometry
Date: Sun, 16 Mar 2003 19:16:59 -0500

On Sunday, March 16, 2003, at 06:41  PM, Philippe C.D. Robert wrote:

Please elaborate. I have been thinking about this a lot and it seems worthy of investigation, but if you know any serious reasons that would make it unrealistic, I would appreciate it if you could mention them. If I want to browse

If you create a new geometry class, how do you make sure that every existing renderer can handle it? If you write, say, a 3D modeller, you might want to use parametric surfaces together with an OGL renderer, but how would you then turn the created surfaces into equivalent implicit surfaces used by another, raytracing-based backend?

There are many questions like these which are not easy to answer.

Undoubtedly! These questions are, however, more in line with my skills and interests than, for example, implementing shader parsers and designing shaders. If it comes down to a mathematical solution, again I'll probably leave it to someone else; but setting the priorities of the representation types, synchronizing them, and writing the state management logic that determines which ones to use when intrigues me very much.

I don't think you can make sure that every renderer instance can produce a visually identical (or even very similar) representation. It would perhaps be possible to generate an approximation, or to fall back on a default rep. Given a PDF, how does NSImage produce a TIFF or a JPEG or whatever? Math of some sort! It must involve some kind of dependency graph, running from the representation that stores the most information overall down to the one that stores the least. With two manually produced reps (or one manual, the other generated and then tweaked by a user/artist), you might have enough fundamental information to generate most other possible reps, given a library of translation algorithms and a support class to implement them and return the desired rep.
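That dependency-graph idea could be sketched roughly as follows: treat each registered translation algorithm as an edge between representation types and search for a chain from the rep you have to the rep a renderer wants. Everything here is hypothetical illustration, not GNU 3DKit API; the representation names and conversion functions are placeholders.

```python
# Sketch: representation conversions as a directed graph. A breadth-first
# search finds a chain of registered translations from the rep we have to
# the rep a renderer needs; if none exists, the caller falls back to a
# default rep. All names are illustrative, not GNU 3DKit API.
from collections import deque

# (source_rep, target_rep) -> conversion function (placeholders here)
CONVERSIONS = {
    ("parametric", "mesh"): lambda g: f"tessellate({g})",
    ("mesh", "point_cloud"): lambda g: f"sample({g})",
    ("parametric", "implicit"): lambda g: f"fit_implicit({g})",
}

def convert(geometry, have, want):
    """Convert `geometry` from rep `have` to rep `want` by chaining
    registered conversions; return None if no chain exists."""
    if have == want:
        return geometry
    queue = deque([(have, geometry)])
    seen = {have}
    while queue:
        rep, g = queue.popleft()
        for (src, dst), fn in CONVERSIONS.items():
            if src == rep and dst not in seen:
                converted = fn(g)
                if dst == want:
                    return converted
                seen.add(dst)
                queue.append((dst, converted))
    return None  # caller substitutes a default rep
```

Note that information only flows "downhill" here: a mesh can be sampled into a point cloud, but nothing in this sketch reconstructs a parametric surface from a mesh, which matches the idea that the richest rep has to be authored, not derived.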

I have a lot more thinking to do. Sometimes the applications of 3D tech that I am interested in would use 3D objects merely as a means of communication. This is very different from using 3D to render photo-realistic images. I know that games are not your focus (maybe you don't even like them), but they are another example where representation has a symbolic, not literal, purpose, no matter the current obsession with realism in 3D games. I am interested in 3D virtual toys, in 3D user interfaces, and in the use of 3D for visualization of information that is not inherently graphical (like structural relationships).

In such applications, representations are much more fluid, and much more dependent upon the viewing environment. As with a 2D graphical user interface, what is a cube in one place might look like a flower or a tree or a television or anything else in other places. For me, visual representation is merely a surface applied onto a structural system. A scene graph is an example of a very common data organization method, one that can be used for much more than strictly spatial data. I am highly interested in finding structural similarities between 3D scenes and other information matrices (not algebra!).
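To make that last point concrete: a scene graph is, structurally, just a tree of nodes plus a traversal, and nothing about that requires the payload to be spatial. A minimal sketch (all names hypothetical):

```python
# Minimal sketch of a scene-graph-like node. The payload could be a
# transform and geometry (spatial use) or, e.g., an organizational unit
# in a structural visualization (non-spatial use). Names are illustrative.
class Node:
    def __init__(self, payload, children=None):
        self.payload = payload
        self.children = children or []

    def walk(self, visit, depth=0):
        # Pre-order traversal, as a renderer or any other client would use.
        visit(self.payload, depth)
        for child in self.children:
            child.walk(visit, depth + 1)

# Non-spatial example: a company hierarchy walked exactly like a scene.
org = Node("company", [Node("engineering", [Node("graphics")]), Node("sales")])
names = []
org.walk(lambda payload, depth: names.append((depth, payload)))
```

The traversal neither knows nor cares whether the payloads are transforms or department names, which is the sense in which the same organizational method serves both 3D scenes and other structural data.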

Brent Gulanowski
Mac game development news and discussion
