
Re: [Swarm-Modelling] lifecycle requirements


From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] lifecycle requirements
Date: Sat, 25 Nov 2006 08:43:15 -0700
User-agent: Thunderbird 1.5.0.8 (Windows/20061025)

glen e. p. ropella wrote:
> Originally, I thought of defobj as a kind of reflection, which allows
> code to explore an object or class and operate on what it finds.  But,
> I'd prefer it to be more like a data-driven programming language, like
> a set of filters that you arrange to make progressive modifications to
> an agent.
> [..]
>
> But, rather than have to explicitly characterize the entire hyperspace
> _or_ pre-specify and name interfaces to which the object will sometimes
> adhere, I'd rather just build a set of object operators, submit the
> object to the operator (or have the object submit itself), and out pops
> the new object without the result having to match any pre-determined
> _type_.

If the attributes and behaviors of an agent are the result of a sequence of filters, then the feature amounts to one of those children's books where one flap selects the head, another the torso, and another the legs. This maps naturally to fixed, named sub-interfaces, and has the advantage that lifecycle changes can remain typed; dynamic method dispatch isn't, in principle, necessary.

Here, dynamic method dispatch provides nothing more than a way to stall branch prediction in the CPU, which is very costly. Since a modern CPU can have a ten-stage pipeline (or more), stalling it can mean losing a factor of ten or more in performance. [Steve Railsback et al. wrote a comparison of agent-based modeling toolkits, and Swarm did very poorly in some cases; when I looked at one of those cases with hardware profiling (Intel VTune), this dispatch overhead was the cause.] Either member functions (e.g. typed messages) or inline conditionals will be far more efficient.
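
For concreteness, here is a minimal sketch of that contrast (in C++ rather than Swarm's Objective-C, with made-up Prey/Predator types that do not come from any Swarm API):

  // Sketch only: hypothetical agent types, not Swarm code.
  #include <cstdio>
  #include <vector>

  // Dynamic dispatch: every step() is an indirect branch through a
  // vtable; with a mixed population the indirect-branch predictor can
  // mispredict, stalling the pipeline on each call.
  struct Agent {
      virtual ~Agent() {}
      virtual void step() = 0;
  };
  struct Prey : Agent { void step() override { std::puts("prey"); } };
  struct Predator : Agent { void step() override { std::puts("predator"); } };

  // Inline conditional on a type tag: an ordinary, well-predicted
  // branch that the compiler is also free to inline and reorder.
  enum Kind { PREY, PREDATOR };
  struct TaggedAgent {
      Kind kind;
      void step() {
          if (kind == PREY) std::puts("prey");
          else              std::puts("predator");
      }
  };

  int main() {
      Prey p; Predator q;
      std::vector<Agent*> heterogeneous{&p, &q};
      for (Agent* a : heterogeneous) a->step();   // indirect calls

      std::vector<TaggedAgent> tagged{{PREY}, {PREDATOR}};
      for (TaggedAgent& a : tagged) a.step();     // direct, predictable
      return 0;
  }

Typed messages get the same benefit as the tagged version: the compiler can bind the call site statically instead of going through a dispatch table.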

To put it another way, the existing hyperspace is implicitly small, given the approach of `filters'. To make the lifecycle features actually do something interesting, one must consider a large hyperspace; in other words, mixing and matching at the DNA level, not at the organ or limb level. This notion of `filters' came from historical implementation constraints: that the `atomic' level is a method, and that (gee-wouldn't-it-be-great-if) methods could be assembled into classes on the fly. With interpreters (e.g. in the R statistical package) or modern just-in-time compiler systems (Tamarin/JavaScript, the Java virtual machine, or the .NET CLR), code at the operator level can be written or mutated on the fly. With runtime code generation, there is therefore the possibility of having the DNA fragments of an agent really change behavioral microstructure.
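
C++ itself cannot generate code at run time without an embedded JIT, but the flavor of the idea can be sketched by assembling an agent's behavior from a runtime-chosen sequence of operators and mutating that sequence while the model runs (all names below are hypothetical):

  // Sketch only: closure composition as a stand-in for true runtime
  // code generation.
  #include <functional>
  #include <iostream>
  #include <vector>

  struct AgentState { double energy; };

  // An operator/filter is any callable that rewrites agent state.
  using Filter = std::function<void(AgentState&)>;

  int main() {
      // The "DNA": a sequence of filters assembled at run time...
      std::vector<Filter> dna;
      dna.push_back([](AgentState& s) { s.energy -= 0.1; });  // metabolize
      dna.push_back([](AgentState& s) { s.energy += 1.0; });  // eat

      AgentState a{5.0};
      for (auto& f : dna) f(a);      // apply the fragments in order

      // ...and mutated mid-run: swap one fragment for another.
      dna[1] = [](AgentState& s) { s.energy += 0.5; };        // scarcer food
      for (auto& f : dna) f(a);

      std::cout << a.energy << "\n"; // 5.0 - 0.1 + 1.0 - 0.1 + 0.5 = 6.3
      return 0;
  }

With a real JIT, as in the CLR or JVM cases above, the fragments themselves could be synthesized or rewritten, not merely selected from a fixed set.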

So yes, `code is data and data is code' is a good thing. Swarm's phases are an approximation to the right thing, a compromise between efficiency and generality. I rationalize their presence in the Swarm codebase as a further factoring of the interfaces, largely benign but not that useful for modeling purposes. But again, they *certainly do not* offer any efficiency gains, as you asserted some time ago; in practice, quite the contrary.

Marcus