
Re: [Swarm-Modelling] lifecycle requirements


From: glen e. p. ropella
Subject: Re: [Swarm-Modelling] lifecycle requirements
Date: Mon, 27 Nov 2006 09:38:01 -0800
User-agent: Thunderbird 1.5.0.7 (X11/20060927)

Maarten Sierhuis wrote:
> - To be able to represent the persistence of identity throughout any
> changes in constituents.
> - A loose binding between constituent independent "objects" in the real
> world.
> - An object can completely change its structure and behavior over some
> period of time _without_ having to change its referent ID, _type_ or
> classification.

Thanks for the bullet list.

> One question that comes to mind, when I look at these requirements. Are
> you talking about a reflexivity? Meaning, that an agent can change
> itself (i.e. its makeup) and its behavior (i.e. its possible actions),
> but from a meta-level the object keeps its identity, meaning is of a
> certain "type" ?

No.  I'm talking about two layers: in one of them, any programming
entity is well-defined; in the other, any entity can be ill-defined.
Identity is preserved subjectively.  In the case of a cognitive agent,
identity is preserved through some reference to "self".  In the case of
an agent observing a phenomenon, identity is preserved as long as the
agent continues to observe that phenomenon, regardless of how it evolves.

> Another concept that comes to mind is autopoiesis. Are you talking about
> modeling agents as some sort of autopoietic system?

If we had the feature I'm after, then one _could_ implement an
autopoietic system.  But, I'm not after self-production, per se.  I'm
just after a modeling language that allows modelers to construct
machines solely through model-level expressions, rather than having to
puncture the model-level and reach down into the programming level in
order to construct the machine that reifies the model.

> If I am on the right track of what you mean, my next question would be
> for you to define specifically what you might want to change in an
> agent? For example, what do you mean with "change its structure and
> behavior over some period of time?"

An example might be an animal progressing from a healthy state to a
diseased state, where certain functions (exposed at the agent's API or
not) appear, go away, or change completely between the two states.  But,
rather than having to declare and define two types of agent:

   HealthyAgent h;
   DiseasedAgent d;

we'd have only one type of agent, referred to as:

   id a;

without any further knowledge of the agent's internals.  We might ask it
to eat, suck on a thermometer, or whatever.  And if it couldn't, then we
would have to _discover_ what it could do and use that to achieve our ends.
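
To make that concrete, here's a minimal sketch in plain Objective-C with
Foundation rather than Swarm's defobj layer (the class and selector names,
and the object_setClass() trick for swapping behavior in place, are made
up for illustration; they're not Swarm or Brahms API).  The caller holds
nothing but an id and discovers at run time what the agent will currently
respond to:

   #import <Foundation/Foundation.h>
   #import <objc/runtime.h>

   /* Hypothetical classes; swapping the class in place is just one way
      to fake "same identity, different structure and behavior". */

   @interface HealthyAnimal : NSObject
   - (void) eat;
   @end

   @implementation HealthyAnimal
   - (void) eat { NSLog (@"eats"); }
   @end

   @interface DiseasedAnimal : NSObject
   - (void) suckThermometer;
   @end

   @implementation DiseasedAnimal
   - (void) suckThermometer { NSLog (@"temperature taken"); }
   @end

   /* The observer holds only an untyped id and discovers, at run time,
      what the agent will currently respond to. */
   static void
   interact (id a)
   {
     if ([a respondsToSelector: @selector (eat)])
       [a eat];
     else if ([a respondsToSelector: @selector (suckThermometer)])
       [a suckThermometer];
     else
       NSLog (@"agent refuses everything; keep probing");
   }

   int
   main (void)
   {
     id a = [[HealthyAnimal alloc] init];

     interact (a);                             /* -> eats */

     /* The referent keeps its identity (same pointer, same id), but its
        structure and behavior change completely. */
     object_setClass (a, [DiseasedAnimal class]);

     interact (a);                             /* -> temperature taken */

     return 0;
   }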

> Here we might start to differ, based on the type of underlying (agent)
> language you use, and what you call an agent. For me, to separate
> objects from agents, an agent is by definition a BDI-type agent
> (otherwise, just talk about objects).
>
> In BDI-like agents, there are
> attributes of an agent, but no attribute-value pairs. Agents have
> first-order beliefs about the state of the world, which can be any
> triple (AgtOrObj.Atr=Val). If you represent an agent with
> attribute-value pair, you are merely representing agents as objects.
> Secondly, agents do not "call methods on other" agents. Agents can only
> communicate beliefs to other agents, or act in the world, which can be
> detected by other agents. This notion that agents call methods on other
> agents comes from object-oriented programming and does not allow for
> autonomous agents. It makes the agent paradigm be nothing new.

Well, I certainly don't imply complex cognitive structures when I say
"agent".  To me, an agent is "an object with the ability to schedule its
own actions."  That means that it could be intelligent or completely
reactive, as long as some of its behavior involves setting its own
agenda.  And it's a bit of a stretch to jump all the way from object to
cognitive agent.  There are plenty of grades in between.
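
As a minimal sketch of what I mean (plain Objective-C; the toy event queue
below stands in for Swarm's activity library, and all class and selector
names are made up), the agent, not the model loop, decides when it next
acts by putting itself back on the schedule:

   #import <Foundation/Foundation.h>

   @class Agent;

   /* A toy event queue standing in for Swarm's activity library. */
   @interface ToySchedule : NSObject
   {
     NSMutableArray *queue;   /* of [time, agent] pairs, scanned each tick */
   }
   - (void) at: (int) time stepAgent: (Agent *) agent;
   - (void) runUntil: (int) endTime;
   @end

   /* The agent: completely reactive inside, but it sets its own agenda
      by deciding when it next wants to be stepped. */
   @interface Agent : NSObject
   {
     int energy;
   }
   - (void) stepAt: (int) time on: (ToySchedule *) schedule;
   @end

   @implementation Agent
   - (id) init
   {
     if ((self = [super init]))
       energy = 3;
     return self;
   }

   - (void) stepAt: (int) time on: (ToySchedule *) schedule
   {
     NSLog (@"t=%d agent acts (energy=%d)", time, energy);
     energy--;

     /* Self-scheduling: a tired agent waits longer before asking the
        schedule to step it again; an exhausted one stops entirely. */
     if (energy > 0)
       [schedule at: time + (energy > 1 ? 1 : 3) stepAgent: self];
   }
   @end

   @implementation ToySchedule
   - (id) init
   {
     if ((self = [super init]))
       queue = [[NSMutableArray alloc] init];
     return self;
   }

   - (void) at: (int) time stepAgent: (Agent *) agent
   {
     [queue addObject: @[@(time), agent]];
   }

   - (void) runUntil: (int) endTime
   {
     for (int t = 0; t <= endTime; t++)
       for (NSArray *entry in [queue copy])   /* snapshot; agents may add */
         if ([entry[0] intValue] == t)
           {
             [queue removeObject: entry];
             [entry[1] stepAt: t on: self];
           }
   }
   @end

   int
   main (void)
   {
     ToySchedule *schedule = [[ToySchedule alloc] init];
     Agent *a = [[Agent alloc] init];

     [schedule at: 0 stepAgent: a];   /* seed the first action... */
     [schedule runUntil: 10];         /* ...the agent schedules the rest */
     return 0;
   }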

Plus, it's fine for a plain object to have _zero_ exposed methods and
negotiate an interface.  So interface negotiation is neither sufficient
nor necessary for "agency".  Having said that, though, I think that at
the modeling level, interface negotiation is required for accurate models.

> In an agent language like Brahms, an agent does not have an ISA relation
> with its parent (its group), rather the agent has an IS_MEMBER_OF
> relation. This means that you do not model agents as being of some type,
> but rather as an agent belonging to some (social) group from which it
> inherits the ability to behave according to the social norms and actions
> of the group. This means that one could completely change the individual
> agent's behavior, without  breaking the fact that the agent has the
> IS_MEMBER_OF relation with the group. This, I think, allows modeling
> "object evolution" (as far as I understood it correctly) without
> changing the overall group membership relation of the agent.

I think it does _help_ with object evolution when/if your agents are
cognitive.  And you might distort the BDI paradigm to claim that very
low-level agents have a very low-level type of cognition (e.g. molecules
communicating their beliefs to other molecules).

But, it goes too far in the cognitive direction, I think.  Agents can
contain internal models of portions of their environment or other agents
_without_ being cognitive.  This type of "model" is meant in the sense
of information theory and Shannon's 10th theorem.  In such systems, the
agents don't really communicate their BDIs and negotiate some
implementation-level mechanism for doing so.  Rather, the models are
explicitly represented at the implementation level.
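
A minimal sketch of that kind of non-cognitive internal model (plain
Objective-C; the names and the moving-average gain are made up): the
tracker keeps an explicit, implementation-level estimate of a phenomenon
it observes, with no beliefs and nothing communicated.

   #import <Foundation/Foundation.h>

   /* Hypothetical tracker: its "internal model" is just an explicit,
      implementation-level estimate of an observed phenomenon (an
      exponential moving average).  No beliefs, nothing communicated. */

   @interface Tracker : NSObject
   {
     double estimate;   /* the tracker's model of the phenomenon */
   }
   - (void) observe: (double) sample;
   - (double) estimate;
   @end

   @implementation Tracker
   - (void) observe: (double) sample
   {
     /* Fold each observation into the model; 0.2 is an arbitrary gain. */
     estimate += 0.2 * (sample - estimate);
   }
   - (double) estimate { return estimate; }
   @end

   int
   main (void)
   {
     Tracker *t = [[Tracker alloc] init];
     double phenomenon = 10.0;

     /* The phenomenon drifts and the readings are noisy; the tracker
        never "knows" anything, it just keeps a model that's good enough
        for its own purposes. */
     for (int i = 0; i < 20; i++)
       {
         phenomenon += 0.5;
         [t observe: phenomenon + ((i % 2) ? 0.3 : -0.3)];
       }

     NSLog (@"actual=%.2f modeled=%.2f", phenomenon, [t estimate]);
     return 0;
   }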

But, that doesn't mean that, say, a controlled system is completely
observable and controllable by the controller.  Hence, there's still a
modeling level that is "higher", more abstract, or ignorant of the
details of the system being controlled.  Much of this ignorance is
expressed through random variables or noise.  But some internals are
accessible only indirectly and have to be accounted for by the internal
complexity of the controller.
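
To illustrate (plain Objective-C again, with made-up names and gains):
the plant below has a hidden drift the controller never observes
directly, and the controller's integral term is the extra internal
complexity that ends up accounting for it.

   #import <Foundation/Foundation.h>

   /* Hypothetical controller/plant pair: the plant has an internal drift
      the controller never observes; the controller's integral term is
      the internal complexity that ends up accounting for it. */

   int
   main (void)
   {
     double hiddenDrift = 0.4;   /* internal to the plant, never observed */
     double output = 0.0;        /* the only thing the controller sees    */
     double target = 5.0;
     double integral = 0.0;      /* controller state standing in for the
                                    unobservable part of the plant        */

     for (int t = 0; t < 50; t++)
       {
         double error = target - output;
         integral += error;
         double control = 0.3 * error + 0.05 * integral;

         /* Plant update: the hidden drift acts whether or not the
            controller knows about it. */
         output += control + hiddenDrift - 0.5 * output;
       }

     NSLog (@"output ends near %.2f despite the unobserved drift", output);
     return 0;
   }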

> So, to make a long story short. I am in the camp of believers that if
> you want to model "object evolution," you have to move away from using
> an object-oriented language as its modeling language. I think the Brahms
> language comes close to what Greg wants to model, although we haven't
> implemented all the reflexivity behavior to the language (we do have
> this on our list of things to do).

[grin]  I presume "Greg" is a token for me... whose actual identifier is
"Glen".  Thanks for the practical demonstration of the loose binding of
my layer (2)!!

As I said above, I don't think one has to assume BDI agents in order to
achieve the lexical fluidity I'm looking for, the fluidity that will
allow object evolution.  But I will say that even if BDI goes too far,
it's on the right path.

-- 
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Never ascribe to malice that which is adequately explained by
incompetence.  -- attributed to Napoleon Bonaparte

