
Re: [Swarm-Modelling] lifecycle requirements


From: glen e. p. ropella
Subject: Re: [Swarm-Modelling] lifecycle requirements
Date: Sun, 26 Nov 2006 15:20:49 -0800
User-agent: Thunderbird 1.5.0.7 (X11/20060927)

Maarten Sierhuis wrote:
> Can you give a requirement definition of what "object evolution" should
> be like? Maybe this is my lack of understanding and it's clear to
> everyone else what this definition is.

Very cool.  I'll give more detail of what I'd like to see.  But,
ultimately, the goal of a discussion like this is to tease out a common
feature the community might like to see.  So, take everything I say as
just my particular slant.

The core concept I'd like to be able to represent is the persistence of
identity throughout any changes in constituents.  Analogies might be
made to sand piles or whirlpools where the object is identifiable even
though its constituents are in flux.  A more biological example might be
the regular replacement of cells in the body, where the larger-scale
structure persists even as its constituents are swapped out.

In programming, an object is indexed by an ID and tokens are _bound_ to
that ID.  The semantic binding is very tight.  This seems unnatural to
me.  And in the same way that we claim OOP provides a more intuitive way
to program than, say, just structured programming, I think a looser
binding would provide a more intuitive way of modeling these
constituent-independent "objects" in the real world.

More concretely, I want at least two layers:  1) the bound programming
layer and 2) a looser lexical layer.  The implementation for (1) would
be guided by practical requirements like the popularity of the available
languages and run-times.  But, layer (2) would be dynamically negotiated
by the agents in the system like Luc Steels' emergent lexicon.  In that
system, a group of robots are given the ability to "refer" (point to
objects in their environment) and play "language games".  A robot bumps
into another robot, points at an object in the environment, and makes a
random noise.  The other robot makes a noise in response.  If the noises
are the same, then that game is a success and that noise is correlated
with that object.  In this way, a stable lexicon emerges.
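To make the naming game concrete, here is a toy Python version.  This is my
own sketch, not Steels' actual setup: the objects, the three-letter "noises",
and the adoption-on-failure rule are all simplifications chosen just to show
the mechanism.

```python
import random

OBJECTS = ["rock", "tree", "ball"]

def random_word():
    """A random three-letter 'noise'."""
    return "".join(random.choice("abcdefg") for _ in range(3))

class Robot:
    def __init__(self):
        self.lexicon = {}  # object -> my preferred word, learned locally

    def word_for(self, obj):
        if obj not in self.lexicon:
            self.lexicon[obj] = random_word()  # invent a noise if I have none
        return self.lexicon[obj]

def play_game(speaker, hearer, obj=None):
    """One language game.  Success means both robots used the same noise;
    on failure the hearer adopts the speaker's noise for that object."""
    if obj is None:
        obj = random.choice(OBJECTS)
    word = speaker.word_for(obj)
    if hearer.word_for(obj) == word:
        return True
    hearer.lexicon[obj] = word
    return False

robots = [Robot() for _ in range(10)]
for _ in range(5000):
    speaker, hearer = random.sample(robots, 2)
    play_game(speaker, hearer)

# With enough games the population typically converges toward one word
# per object -- a stable, negotiated lexicon.
distinct = {o: len({r.lexicon.get(o) for r in robots}) for o in OBJECTS}
print("distinct words per object:", distinct)
```

The point for our purposes is that the token-to-object binding lives inside
each agent and is negotiated at run-time, rather than fixed by the programmer.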

Now, I wouldn't care if this emergence were required or if you started
with some "ontology", as long as the ontology were dynamic.

What I mean by "object evolution" is that an object can completely
change its structure and behavior over some period of time _without_
having to change its referent ID, _type_, or classification.  Such a
loosely coupled layer would allow this _if_ we could programmatically
execute actions on that object through that layer.

This would allow us to explicitly make modeling statements about, say,
entity X even though X is different things with different behaviors at
different times (or in different contexts).
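A minimal sketch of what I mean, in Python (the class names are mine and
purely illustrative): the label "X" stays fixed in the model while the
implementation behind it is replaced wholesale -- structure, behavior, even
what we'd normally call its type.

```python
class Caterpillar:
    def act(self):
        return "crawling"

class Butterfly:
    def act(self):
        return "flying"

class Entity:
    """A stable identity whose constituents are free to change."""
    def __init__(self, label, impl):
        self.label = label
        self._impl = impl

    def become(self, new_impl):
        self._impl = new_impl  # evolution: same referent, new everything

    def act(self):
        return self._impl.act()

x = Entity("X", Caterpillar())
print(x.label, x.act())  # X crawling
x.become(Butterfly())
print(x.label, x.act())  # X flying
```

Modeling statements are made against `x.label`, never against the class of
the thing currently standing behind it.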

We know that some form of this has been a requirement in complex
programs for quite some time because it has led to things like interfaces
versus implementations and method polymorphism.  But, these technical
solutions all assume a programmatic and tight binding.  And they lead to
explicit code for handling the results (testing for protocol adherence
or using the reflection methods).  And none of these really facilitate
modeling because the low level languages being used force us to
explicitly consider every implementation-level construct that a token
might refer to rather than having the underlying implementation take
care of it invisibly.
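Roughly the kind of explicit, implementation-level handling I'm complaining
about, in Python (illustrative classes, not code from any real framework):

```python
class Forager:
    def forage(self):
        return "foraging"

class Flyer:
    def fly(self):
        return "flying"

def step(x):
    # The caller must enumerate every construct the token might refer to.
    if hasattr(x, "forage"):    # testing for protocol adherence
        return x.forage()
    if isinstance(x, Flyer):    # or checking classification via reflection
        return x.fly()
    return None                 # and still handle the leftovers explicitly

print(step(Forager()))  # foraging
print(step(Flyer()))    # flying
```

Every new kind of constituent forces another branch into the caller, which is
exactly the burden an invisible underlying layer should absorb.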

-------------------------

The first step down this road, it seems to me, is to invert the control.
In most programming exercises, methods are called on objects.  That
external locus of control is what presses us to fully _determine_ what
object we're manipulating beforehand.  E.g., the programmer should know
whether or not object X responds to method Y before she writes code
calling method Y on object X.  And if X.Y() is invalid, we want the
system to notify the programmer.

If we invert this, then we can make knowledge purely local (a basic ABM
requirement), where only the object itself knows (for sure) which
methods it responds to.  (Further, if it's partly a BDI agent, then
perhaps it has more, fewer, or different methods than it _thinks_ it
has.)  Inverting the control would require the development of the second
layer and would allow the evolution of any object while retaining its
label/token.
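Here is one way the inversion could look, as a Python sketch (a hypothetical
design of mine, not any existing API): nothing outside the agent sees its
method table; a caller can only _send_ a message, and the agent alone decides
whether and how to respond.

```python
class Agent:
    def __init__(self, name, behaviors):
        self._name = name
        self._behaviors = behaviors  # purely local knowledge

    def receive(self, message, *args):
        handler = self._behaviors.get(message)
        if handler is None:
            return None  # no programmer-time error: the agent simply
                         # doesn't respond to this message
        return handler(*args)

ant = Agent("ant", {"forage": lambda: "found food"})
print(ant.receive("forage"))  # found food
print(ant.receive("fly"))     # None -- the sender only learns at run-time
```

Because the behavior table is private and mutable, the agent can evolve
arbitrarily while senders keep addressing it by the same token.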

If the above is clear enough and interesting enough, it would be
interesting to build a kind of pseudo-code language to show what
agent-agent interaction would look like at that second layer.
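As a first stab at that pseudo-code, here is one entirely speculative shape
for the second layer, written as Python so it runs (the token "wubba" and all
the names are invented for illustration): agents address each other through
their own negotiated tokens rather than bound IDs, and every interaction is a
run-time offer that may simply fail.

```python
class LexicalAgent:
    def __init__(self):
        self.lexicon = {}  # my private token -> referent, locally negotiated

    def refer(self, token):
        return self.lexicon.get(token)  # may resolve to nothing at all

    def interact(self, token, message):
        other = self.refer(token)
        if other is None:
            return None  # the token means nothing to me (yet)
        return other.receive(message)

class Peer:
    def receive(self, message):
        return "ok" if message == "greet" else None

a = LexicalAgent()
a.lexicon["wubba"] = Peer()          # a token this agent has negotiated
print(a.interact("wubba", "greet"))  # ok
print(a.interact("zorp", "greet"))   # None: an unknown token
```

The bound programming layer (1) is whatever sits behind `Peer`; the lexical
layer (2) is the per-agent `lexicon`, which the naming-game dynamics above
could populate and revise.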

-- 
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Whoever fights monsters should see to it that in the process he does not
become a monster.   And when you look long into an abyss, the abyss also
looks into you.   -- Nietzsche, "Beyond Good and Evil"

