
Re: [Swarm-Modelling] lifecycle requirements


From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] lifecycle requirements
Date: Sun, 26 Nov 2006 19:41:43 -0700
User-agent: Thunderbird 1.5.0.8 (Windows/20061025)

glen e. p. ropella wrote:
> The core concept I'd like to be able to represent is the persistence of
> identity throughout any changes in constituents.

A set of untyped objects will do this, e.g. a list...

> Analogies might be made to sand piles or whirlpools where the object is
> identifiable even though its constituents are in flux.

...but that assumes that there is really a thing at all. Turbulence requires some forces acting on the water. It's not clear to me this is an instance of object evolution, although, very loosely speaking, it could be called an instance of collective evolution, one that could be identified and named *once* it recurred enough times to see the pattern.

> A more biological example might be the regular replacement of cells in the
> body while preserving the larger scale feature.

This one still seems like an untyped set of objects in an appropriate vicinity, just that the objects are changing over time -- some being removed and some being added.
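
Roughly, the kind of thing I mean is below -- a minimal sketch in plain Objective-C with Foundation collections rather than Swarm's own classes, and with made-up `Tissue'/`Cell' names. The identity is just the containing object itself; its constituents are an untyped collection that turns over.

#import <Foundation/Foundation.h>

/* A constituent; the containing object doesn't care about its type. */
@interface Cell : NSObject
@end
@implementation Cell
@end

/* The Tissue's identity is the Tissue object itself.  Its constituents
   live in an untyped mutable collection that can turn over freely. */
@interface Tissue : NSObject {
  NSMutableSet *members;
}
- (void)addMember:(id)m;
- (void)removeMember:(id)m;
- (unsigned)memberCount;
@end

@implementation Tissue
- (id)init {
  if ((self = [super init]))
    members = [[NSMutableSet alloc] init];
  return self;
}
- (void)addMember:(id)m    { [members addObject:m]; }
- (void)removeMember:(id)m { [members removeObject:m]; }
- (unsigned)memberCount    { return (unsigned)[members count]; }
@end

int main(void) {
  NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
  Tissue *t = [[Tissue alloc] init];      /* the persistent identity */
  Cell *c = [[Cell alloc] init];
  [t addMember:c];
  [t addMember:[[Cell alloc] init]];
  [t removeMember:c];                     /* constituents come and go... */
  [t addMember:[[Cell alloc] init]];      /* ...but `t' is still `t' */
  NSLog(@"tissue %p now has %u members", (void *)t, [t memberCount]);
  [pool drain];
  return 0;
}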

> I think a looser binding would provide a more intuitive way of modeling
> these constituent-independent "objects" in the real world.

There is still a real distinction between situations involving membership and situations involving non-membership. The latter are subjective, defined per model, and the former are, I think it is safe to say, real. Take the engine out of a car and it won't go anywhere. The implicit ones I see as phenomena to track and measure, but not ones that should have integrated data structures for that thing (that may

> More concretely, I want at least two layers:  1) the bound programming
> layer and 2) a looser lexical layer.  The implementation for (1) would
> be guided by practical requirements like the popularity of the available
> languages and run-times.  But, layer (2) would be dynamically negotiated
> by the agents in the system like Luc Steels' emergent lexicon.  In that
> system, a group of robots are given the ability to "refer" (point to
> objects in their environment) and play "language games".  A robot bumps
> into another robot, points at an object in the environment, and makes a
> random noise.  The other robot makes a noise in response.  If the noises
> are the same, then that game is a success and that noise is correlated
> with that object.  In this way, a stable lexicon emerges.

Why not have the second layer be a clean instance of the first?
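
For reference, a bare-bones version of that naming game is only a few lines. This is my own sketch in plain Objective-C/Foundation with made-up class and selector names, not Steels' actual implementation: each robot keeps a private object->noise lexicon, and on a failed game the hearer adopts the speaker's noise.

#import <Foundation/Foundation.h>
#include <stdlib.h>

/* Each robot keeps a private lexicon: object name -> noise. */
@interface Robot : NSObject {
  NSMutableDictionary *lexicon;
}
- (NSString *)noiseFor:(NSString *)thing;          /* invents one if unknown */
- (void)adopt:(NSString *)noise forThing:(NSString *)thing;
@end

@implementation Robot
- (id)init {
  if ((self = [super init]))
    lexicon = [[NSMutableDictionary alloc] init];
  return self;
}
- (NSString *)noiseFor:(NSString *)thing {
  NSString *noise = [lexicon objectForKey:thing];
  if (noise == nil) {                              /* no word yet: random one */
    noise = [NSString stringWithFormat:@"%c%c%c", 'a' + rand() % 26,
             'a' + rand() % 26, 'a' + rand() % 26];
    [lexicon setObject:noise forKey:thing];
  }
  return noise;
}
- (void)adopt:(NSString *)noise forThing:(NSString *)thing {
  [lexicon setObject:noise forKey:thing];
}
@end

/* One language game: success iff both robots utter the same noise for the
   thing pointed at; on failure the hearer adopts the speaker's noise. */
static BOOL playGame(Robot *speaker, Robot *hearer, NSString *thing) {
  NSString *s = [speaker noiseFor:thing];
  if ([s isEqualToString:[hearer noiseFor:thing]])
    return YES;
  [hearer adopt:s forThing:thing];
  return NO;
}

int main(void) {
  NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
  NSArray *things = [NSArray arrayWithObjects:@"rock", @"tree", @"ball", nil];
  NSMutableArray *robots = [NSMutableArray array];
  int i, wins = 0;
  for (i = 0; i < 5; i++)
    [robots addObject:[[Robot alloc] init]];
  for (i = 0; i < 2000; i++) {                     /* repeated random games */
    Robot *speaker = [robots objectAtIndex:rand() % 5];
    Robot *hearer  = [robots objectAtIndex:rand() % 5];
    if (speaker != hearer &&
        playGame(speaker, hearer, [things objectAtIndex:rand() % 3]))
      wins++;
  }
  NSLog(@"%d of 2000 games succeeded", wins);  /* rate rises as the lexicon converges */
  [pool drain];
  return 0;
}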
What I mean by "object evolution" is that an object can completely
change its structure and behavior over some period of time _without_
having to change it's referent ID, _type_ or classification.  Such a
loosely coupled layer would allow this _if_ we could programatically
execute actions on that object through that layer.
Consider a baby that can crawl vs. a human that can walk, windsurf, ski, run, whatever. It's far, far easier to implement a model with a `baby' phase of life that has a `crawl' method and an `adult' phase of life with methods for the other mentioned forms of locomotion than it is to make a model that considers the physics of all the ways that the 600 muscles (N^600 combinations) of a human could coordinate to create any kind of locomotion (and as a function of objects in the environment -- e.g. driving a car with a joystick run by the tongue). Either one of these is consistent with your definition, and they are completely different, both from a modeling point of view and as they relate to toolkit infrastructure. The former is trivial to implement, and the latter requires many details about the morphology of a person, muscle strengths, nerve feedback accuracy, physics of the environment, etc. etc.
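
The cheap version is just a phase object that gets swapped out while the person keeps its identity. A minimal sketch (my own names, plain Objective-C, nothing from the Swarm libraries): the Person's referent ID never changes, only the behavior behind it does.

#import <Foundation/Foundation.h>

@interface BabyPhase : NSObject
- (void)crawl;
@end
@implementation BabyPhase
- (void)crawl { NSLog(@"crawling"); }
@end

@interface AdultPhase : NSObject
- (void)walk;
- (void)ski;
@end
@implementation AdultPhase
- (void)walk { NSLog(@"walking"); }
- (void)ski  { NSLog(@"skiing"); }
@end

/* The Person keeps the same referent ID (the object itself) for life;
   only the phase object behind it changes. */
@interface Person : NSObject {
  id phase;
}
- (void)growUp;
- (void)tryToMove:(SEL)how;
@end

@implementation Person
- (id)init {
  if ((self = [super init]))
    phase = [[BabyPhase alloc] init];
  return self;
}
- (void)growUp { phase = [[AdultPhase alloc] init]; }
- (void)tryToMove:(SEL)how {
  if ([phase respondsToSelector:how])
    [phase performSelector:how];
  else
    NSLog(@"can't %@ in this phase of life", NSStringFromSelector(how));
}
@end

int main(void) {
  NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
  Person *p = [[Person alloc] init];   /* same identity throughout */
  [p tryToMove:@selector(crawl)];      /* crawling */
  [p tryToMove:@selector(ski)];        /* can't ski in this phase of life */
  [p growUp];
  [p tryToMove:@selector(ski)];        /* skiing */
  [pool drain];
  return 0;
}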

> We know that some form of this has been a requirement in complex
> programs for quite some time because it's led to things like interfaces
> versus implementations and method polymorphism.  But, these technical
> solutions all assume a programmatic and tight binding.

Yes..

> And none of these really facilitate
> modeling because the low level languages being used force us to
> explicitly consider every implementation-level construct that a token
> might refer to rather than having the underlying implementation take
> care of it invisibly.

What is the job of the man behind the curtain? What is an example of a low level language forcing us to consider every implementation-level construct? For example, in Swarm, I can already send a message from one agent type to a set of unknown types and hope for the best (or trivially arrange for these objects to swallow unknown messages). What else, if anything, do you mean?
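
Concretely, a sender-agnostic loop plus a base class that swallows what it doesn't understand looks roughly like this in plain Objective-C (Foundation rather than Swarm's own collections and zones; `Walker', `Rock', and `step' are made-up names, and the swallowing trick as written only handles argument-less messages):

#import <Foundation/Foundation.h>

/* A base class whose instances silently swallow messages they don't
   implement instead of raising doesNotRecognizeSelector:. */
@interface Tolerant : NSObject
@end
@implementation Tolerant
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
  NSMethodSignature *sig = [super methodSignatureForSelector:sel];
  return sig ? sig : [NSMethodSignature signatureWithObjCTypes:"v@:"];
}
- (void)forwardInvocation:(NSInvocation *)inv {
  /* do nothing: the unknown message is swallowed */
}
@end

@interface Walker : Tolerant
- (void)step;
@end
@implementation Walker
- (void)step { NSLog(@"walker steps"); }
@end

@interface Rock : Tolerant      /* has no -step at all */
@end
@implementation Rock
@end

int main(void) {
  NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
  NSArray *agents = [NSArray arrayWithObjects:
                      [[Walker alloc] init], [[Rock alloc] init], nil];
  unsigned i;
  for (i = 0; i < [agents count]; i++)
    [[agents objectAtIndex:i] step];   /* sender neither knows nor cares
                                          what kind of object this is */
  [pool drain];
  return 0;
}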

> The first step down this road, it seems to me, is to invert the control.
> In most programming exercises, methods are called on objects.  That
> external locus of control is what presses us to fully _determine_ what
> object we're manipulating beforehand.  E.g. the programmer should know
> whether or not object X responds to method Y before she writes code
> calling method Y on object X.  And if X.Y() is invalid, we want the
> system to notify the programmer.

I'm puzzled by this. As you know, this is not the case with Swarm. The norm in Swarm is to use messages that have no knowledge of their destination until they get there. This, by itself, already solves the control inversion problem _if_ that is considered interesting in some model. It's great that we have this feature; it's less great that so many people tie themselves to it. Unfortunately, as I mentioned, it comes at a significant efficiency cost, and a growing and menacing one for Swarm as CPU architectures advance. To avoid it, the current way to do this in Swarm is to make static or inline C functions in implementation classes. For a more general solution, Swarm could be ported to Objective C++; then we'd have objects that could respond to typed member functions or to untyped messages. To me this is a judgment modelers can make, and it's pretty obvious when to make it.
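
By static or inline C functions in implementation classes I mean something like this (a made-up `Walker' example, not code from Swarm itself; the point is just that the hot path compiles to a direct, inlinable call instead of a message send):

#import <Foundation/Foundation.h>

@interface Walker : NSObject {
@public
  double x, y;
}
- (void)step;
@end

/* The hot path as a static inline C function in the implementation file:
   no dynamic dispatch, so the compiler can inline it and the CPU's branch
   predictor has nothing to guess at. */
static inline void walker_step (Walker *w) {
  w->x += 1.0;
  w->y += 0.5;
}

@implementation Walker
/* The message interface is kept for code that wants the loose binding... */
- (void)step { walker_step (self); }
@end

int main(void) {
  NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
  Walker *w = [[Walker alloc] init];
  long i;
  for (i = 0; i < 1000000; i++)
    walker_step (w);          /* ...but the inner loop calls the C function */
  NSLog(@"x = %g", w->x);
  [pool drain];
  return 0;
}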

Lately I've been coming around to the intuition that the CPU branch prediction problems caused by dynamic method dispatch are (after suitable optimizations) logically related to interesting things a model or system could do. That is, in principle it ought to be possible to make a high performance message dispatch system by watching simulation dynamics, and further, that in a heterogeneous model (say some kind of social network) the places where it can't be optimized are interesting locales for study. Maybe I should elaborate on this to the NSA... ;-)

But seriously, I think there is a real possibility to make Swarm's message dispatch as fast as, or faster than, precompiled member functions, along the lines of what Dynamo does. Whether anything `deep' can be learned from these statistics, I don't know.

http://www.cag.lcs.mit.edu/dynamorio/
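
To be clear about the flavor of optimization I have in mind, the crudest version is just classic IMP caching: look up the implementation once, then call through a plain function pointer in the hot loop. This is standard Objective-C, not Swarm-specific and nothing like the dynamic recompilation the Dynamo work does; `Agent' and `step' are made-up names.

#import <Foundation/Foundation.h>

@interface Agent : NSObject
- (void)step;
@end
@implementation Agent
- (void)step { /* model behavior here */ }
@end

int main(void) {
  Agent *a = [[Agent alloc] init];
  SEL sel = @selector(step);

  /* Look up the implementation once... */
  void (*stepIMP)(id, SEL) = (void (*)(id, SEL))[a methodForSelector:sel];

  long i;
  for (i = 0; i < 10000000; i++)
    stepIMP(a, sel);   /* ...then call it with no per-send dynamic lookup */
  return 0;
}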




