
Re: [Swarm-Modelling] lifecycle requirements


From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] lifecycle requirements
Date: Mon, 27 Nov 2006 12:24:08 -0700
User-agent: Thunderbird 1.5.0.4 (Windows/20060516)

glen e. p. ropella wrote:
> Marcus G. Daniels wrote:
>> glen e. p. ropella wrote:
>>> The core concept I'd like to be able to represent is the persistence of
>>> identity throughout any changes in constituents.
>> A set of untyped objects will do this, e.g. a list...
>
> No, because you still have to call explicitly named methods over those
> untyped objects in the list.  And you have to know that the container is
> a _list_ and not, say, a bag or queue or whatever.
The Swarm collections library provides a uniform interface for different sorts of containers, using dynamic messages to each object in the container. Not sure what you mean by "explicitly named" here. You don't have to declare these methods, just conjure up a selector which may or may not be really implemented by the objects in the set. In C# 2 or Java 5 you could use `generics' for homogeneous sets, or rely on an abstract superclass to declare interfaces for methods all subclasses will implement. In C++ templates are available for high-performance specialization of a collection per its type. Providing a single interface to different container implementations is not a problem.
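To sketch the point about a uniform interface over different container implementations: in Java, an interface (or abstract superclass) plus a `Collection`-typed parameter lets the caller iterate and dispatch without knowing whether the backing store is a list, queue, or set. The `Steppable` interface and `Walker` agent below are invented for illustration; this is not the Swarm collections API.

```java
import java.util.*;

// Hypothetical agent interface: all subclasses promise these methods.
interface Steppable {
    void step();
    int energy();
}

class Walker implements Steppable {
    private int e = 10;
    public void step() { e -= 1; }       // moving costs one unit of energy
    public int energy() { return e; }
}

public class UniformContainer {
    // Works identically over ANY Collection implementation:
    // the caller never learns whether it is a list, deque, or set.
    static int stepAll(Collection<? extends Steppable> agents) {
        int total = 0;
        for (Steppable a : agents) { a.step(); total += a.energy(); }
        return total;
    }

    public static void main(String[] args) {
        Collection<Walker> asList  = new ArrayList<>(List.of(new Walker(), new Walker()));
        Collection<Walker> asQueue = new ArrayDeque<>(List.of(new Walker(), new Walker()));
        System.out.println(stepAll(asList));   // 18
        System.out.println(stepAll(asQueue));  // 18
    }
}
```

The generics bound `? extends Steppable` is the homogeneous-set case; the dynamic-selector case (a message that may or may not be implemented) needs reflection instead, as discussed below in the thread.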
> It's not the collective that I'm trying to get at, it's the post-facto
> naming of the object that I'm after.  The collective gets together in
> some pattern, an external agent perceives this collective and _labels_
> the pattern as, say, a "whirlpool".  That external agent should then be
> able to act upon that whirlpool without explicit knowledge of how to act
> on any given constituent of the whirlpool.
I agree this is an interesting problem. I'm glad that is now clearly articulated.

[much deleted]
> Same car, same door, same action, different lexicons.
This case is much simpler to implement, so I'm not sure it is the best example. The post-facto naming may require a perceptual capability: one that integrates a range of data, some of it dynamic, over a period of time. So the notion of a lexicon is not just a mapping between static facts; it has to be a mapping from named things to procedures for detecting them, say, as a 0th approximation, a hash table of `closures' over the appropriate contexts.
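As a sketch of that 0th approximation, a lexicon in Java could be a map from names to detection closures. The `Snapshot` record, the angular-momentum/density tests, and the thresholds below are all invented for illustration; the point is only that "whirlpool" is detected by a procedure over dynamic data, not declared as a static fact.

```java
import java.util.*;
import java.util.function.Predicate;

public class Lexicon {
    // Hypothetical slice of perceived world state at one moment.
    record Snapshot(double angularMomentum, double density) {}

    // The lexicon: names mapped to closures that recognize the pattern.
    static final Map<String, Predicate<Snapshot>> LEXICON = Map.of(
        "whirlpool", s -> s.angularMomentum() > 5.0 && s.density() > 0.8,
        "cluster",   s -> s.density() > 0.9
    );

    // Post-facto naming: run every detector, return the labels that fire.
    static List<String> label(Snapshot s) {
        List<String> names = new ArrayList<>();
        for (var e : LEXICON.entrySet())
            if (e.getValue().test(s)) names.add(e.getKey());
        Collections.sort(names);   // Map.of has no stable order
        return names;
    }

    public static void main(String[] args) {
        System.out.println(label(new Snapshot(7.2, 0.95))); // [cluster, whirlpool]
        System.out.println(label(new Snapshot(0.1, 0.2)));  // []
    }
}
```

A fuller version would close over a time window of snapshots rather than a single one, since the detection may need integration over a period of time.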

>> I'm puzzled by this.  As you know, this is not the case with Swarm.
>> The norm in Swarm is to use messages that have no knowledge of their
>> destination until they get there.
> Actually, they have a great deal of knowledge of their destination.
> They have to have a valid pointer to that destination or else the
> program will crash with a seg fault.
An agent and a task sent to it can both be invented or accessed by other agents during a simulation, not just when they are designed and compiled. Messages, classes, and objects in Objective C are just stuff to pick up and use independently at runtime. But this whole notion of "is the language sufficiently dynamic?" is bogus provided a statically typed language offers a reflection capability. E.g. in Java/Swarm we simply have a Selector class that uses Java reflection to find the methods that are needed, and the Swarm scheduler & probe mechanisms can then call or interrogate the things found by reflection. I think it's really just a question of how easy it is, in practice, to do dynamic things when the need arises.
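A minimal sketch of the find-by-reflection-then-invoke idea (this is not the actual Java/Swarm Selector class; the `Prey` class and `flee` method are invented for illustration): the method name stays a plain string until dispatch time, so the caller compiles without any static knowledge of the target's type.

```java
import java.lang.reflect.Method;

public class DynamicSend {
    // Hypothetical agent; nothing about it is known to perform() below.
    public static class Prey {
        public String flee() { return "fleeing"; }
    }

    // Look up a zero-argument method by name on an arbitrary object
    // and call it. The "selector" may or may not really be implemented
    // by the target; getMethod throws NoSuchMethodException if not.
    static Object perform(Object target, String selector) throws Exception {
        Method m = target.getClass().getMethod(selector);
        return m.invoke(target);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(perform(new Prey(), "flee")); // fleeing
    }
}
```

A scheduler built on this can queue (object, selector) pairs invented at runtime and fire them later, which is the dynamic behavior under discussion, achieved in a statically typed language.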
> And even if one believes that a good modeler must be a good
> programmer, the two tasks (modeling and programming) are different
> tasks.  And I think it's useful to consider the modeling problems
> separately from the programming problems when discussing requirements.
Agent modeling is concerned with synthesizing mechanisms to reproduce phenomena that can't be understood by studying the components independently. But computer systems today are large and complex, and can themselves display chaotic behavior, so that many experienced users of computers, at least in some situations, don't really understand what happens in them any better than scientists understand `real world' phenomena. Compare, for example, the network dynamics of a complex supercomputing application with a traffic jam on the highway. So I'd argue that, in practice, modeling and the process of making computers more useful can often be similar activities.

