
[Swarm-Modelling] topology tool


From: Scott Christley
Subject: [Swarm-Modelling] topology tool
Date: Sat, 31 Jan 2004 03:06:10 -0500

On Friday, January 30, 2004, at 08:55 AM, Darren Schreiber wrote:

There might be a topology tool that would allow you to shift your agents from interacting on a Moore neighborhood, to a von Neumann neighborhood, to a hex grid, to a 3D grid, to a random network, to a soup, to a small-world network, etc.

Let me expound upon this specific topic, because it has been flitting in and out of my consciousness for some time now; please bear with me as I get theoretical for a moment before jumping into implementation.

For your word "topology" I will use the word "structure"; either is fine, but I like the mechanistic, concrete visualization that "structure" produces in my brain. And while you loosely mention interacting agents, I will use the broad term "action", which includes everything one might associate with a process or an act of doing something. The physical analogy for structure is space, time, and mass; the physical analogy for action is forces, be they gravity, electrical/magnetic, quantum, etc.

So I make the following claims without proof:

1) Structure by itself tells us very little.
2) Action by itself tells us just as little.
3) Production (or understanding!) of anything useful requires a tight-coupling of structure and action.
4) The tight-coupling is neither obvious nor trivial (it is often what we are trying to discover!), yet it intricately determines what is produced.
5) The tight-coupling is itself structure.

Let me make the further claim that action is purely a conceptual ideal, and that action itself does not exist without some physical structure to support that existence. For example, the communication of an action requires some symbolic structure like an English sentence, or an ObjC method, or a mathematical equation.

So you might take the viewpoint that structure is akin to agent/system state and that action is the mechanism which alters and creates structure, and you would be right. You could also take the viewpoint that structure (as embodied in state) drives the actions to be performed, and you would also be right. The point is that structure and action are intricately interwoven, and beware to those who try to separate them, for here be demons.

Why do I make these statements? Because it is my belief that you cannot consider structure and action as "components", as if you were writing an accounting system and talking about purchase orders, invoices, and printing checks. If that were true, then a single "interface" would suffice to generalize the interaction, but we are talking about broad, generic simulation of all types of phenomena with mostly ill-defined models.

And the 42 million dollar question is, which came first, structure or action? ;-)

Okay, so now to implementation. If you take my theory at face value, the implication is that attempting to provide plug-and-play capabilities as Darren describes would require an infinite number of interfaces to connect up all the possible structures, combinations of structures, and ways actions can be represented in structures. The pragmatic approach is to say: well, we have this finite set of interfaces that we know about, so let's program those and iteratively add more as we need more.
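To make the pragmatic approach concrete, here is a minimal sketch, in Python rather than anything Swarm actually provides, of what a plug-and-play topology might reduce to: the neighborhood becomes a pluggable function and we simply enumerate the ones we know about. All names here are hypothetical.

# Hypothetical sketch, not Swarm API: the "topology" reduced to a
# pluggable neighbor function that the rest of the model never looks inside.
# Swapping Moore for von Neumann is then a one-line change.

def moore_neighbors(x, y):
    # all eight surrounding cells
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def von_neumann_neighbors(x, y):
    # the four orthogonal cells
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

neighbors = moore_neighbors    # the finite set of "interfaces" we know about
print(neighbors(2, 3))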

This sounds fine until you reach the point where an inconsistency or conflict arises, and this will surely happen at some point. The result is that the required change in some structure needs to percolate throughout the whole system of structures, something that is extremely hard to do with our current language implementations. A good example is discovering a conflict between method names (say, because you are merging two models together) where you need to rename one of the methods; this entails going through all the code and changing the name everywhere the method is used. A more complex example is changing space from a 2D grid to a 3D grid: that change needs to percolate through method calls to add an additional parameter, some classes need additional instance variables defined, and some algorithms (like a distance metric calculation) will require even more substantial changes.
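The 2D-to-3D example is worth seeing in code. Here is a hypothetical sketch (again, invented names, not Swarm code) of a model hard-wired to a 2D grid; every commented line is a separate place the change has to percolate to.

# Hypothetical sketch of a model hard-wired to 2D.  Moving to a 3D grid
# forces edits in three different places at once: the instance variables,
# the method signature (and therefore every caller), and the distance metric.

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y        # 3D: needs a new instance variable, self.z

    def move_to(self, x, y):         # 3D: signature gains a z parameter,
        self.x, self.y = x, y        #     so every call site changes too

def manhattan_distance(a, b):        # 3D: the metric itself changes shape
    return abs(a.x - b.x) + abs(a.y - b.y)

a, b = Agent(0, 0), Agent(3, 4)
print(manhattan_distance(a, b))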

So my suggestion is that actions, as conceptual ideals, should be represented in the most general symbolic structure possible, and when an action is needed in a more concrete setting, the structure to support that action (in that concrete setting) should be generated dynamically by the simulation system. Likewise, structures themselves will need to be represented with general symbolic structures.

For a specific example, take our favorite heatbugs application. Here is how I envision "writing" the application.

1) The central vision of the application is a heatbug, which is a structure, so we pull up our list of structures, pick the generic version, and change its symbolic representation (i.e. its name) to "heatbug".

2) Now we are thinking: what is a heatbug? Well, it has a location, which is a structure; it has movement, which is an action; it has diffusion, which is also an action; and it has unhappiness, which is a structure. We again pull up the lists for structures and actions; location and movement are already pre-defined, but diffusion and unhappiness are probably new ones. At this point we still have just a collection of symbols: no code, no interfaces.

3) Now we get more specific. We want to specify some structures, like location, in more detail, so we pull down our list of structures, find a 2D discrete grid, and attach its symbol to location. No coding of X and Y instance variables allowed here! Likewise, we attach the symbol for a real-valued scalar to the unhappiness structure.

4) Now we move on to specifying the actions in more detail. We find the symbol for the randomWalk action and attach it to our movement action. Diffusion is a bit more complex, but we conceptually understand it as a scalar value with a location and some actions like diffuseHeat and increaseHeat.

And so on; hopefully you get the idea. We still haven't incorporated time or graphical displays as structures in our simulation yet!
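If it helps, here is one way the "collection of symbols" from steps 1 through 4 might be written down as plain data before any code exists. This notation is purely hypothetical, just a way of showing that the model at this stage is symbols and attachments, nothing more.

# Hypothetical notation only: the heatbug model as symbols, no code yet.
# "Attaching" a symbol (steps 3 and 4) just means filling in a name.

heatbug = {
    "kind": "structure",
    "name": "heatbug",
    "structures": {
        "location":    "discrete-grid-2d",    # attached in step 3
        "unhappiness": "real-scalar",          # attached in step 3
    },
    "actions": {
        "movement": "randomWalk",              # attached in step 4
        # diffusion is itself a small model: a scalar with a location
        # plus actions like diffuseHeat and increaseHeat (step 4)
    },
}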

Now when you want to run the simulation, the simulation system (insert miracle here) binds all of the structures together in a concrete representation, generates the code, and off you go.
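Very roughly, and with the full understanding that the binding is exactly the hard part, the miracle step might walk the symbolic model and emit concrete code from templates keyed by symbol. None of these names exist anywhere; this is a deliberately naive sketch of the idea, not a design.

# Deliberately naive, hypothetical sketch of "bind the symbols, generate the code".
# Each structure symbol maps to a code template; the generator walks the
# symbolic model and emits a concrete class definition as text.

TEMPLATES = {
    "discrete-grid-2d": "self.{name}_x = 0; self.{name}_y = 0",
    "real-scalar":      "self.{name} = 0.0",
}

model = {
    "name": "heatbug",
    "structures": {"location": "discrete-grid-2d",
                   "unhappiness": "real-scalar"},
}

def generate_class(model):
    lines = [f"class {model['name'].capitalize()}:",
             "    def __init__(self):"]
    for field, symbol in model["structures"].items():
        stmt = TEMPLATES[symbol].format(name=field)
        lines.append(f"        {stmt}  # {field} bound to {symbol}")
    return "\n".join(lines)

print(generate_class(model))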

I'm obviously leaving a lot of detail out, and the point is not that there should be some visual drag-and-drop way to construct simulations. The point is that you need to input the "model" (i.e. a symbolic structural representation), and it is up to the system to turn that model into a concrete simulation.

cheers
Scott


