Re: [Swarm-Modelling] lifecycle requirements


From: glen e. p. ropella
Subject: Re: [Swarm-Modelling] lifecycle requirements
Date: Mon, 27 Nov 2006 08:55:27 -0800
User-agent: Thunderbird 1.5.0.7 (X11/20060927)

Marcus G. Daniels wrote:
> glen e. p. ropella wrote:
>> The core concept I'd like to be able to represent is the persistence of
>> identity throughout any changes in constituents. 
>
> A set of untyped objects will do this, e.g. a list...

No, because you still have to call explicitly named methods on those
untyped objects in the list.  And you have to know that the container is
a _list_ and not, say, a bag or queue or whatever.

>> Analogies might be
>> made to sand piles or whirlpools where the object is identifiable even
>> though its constituents are in flux.  
>
> ...but that assumes that there is really a thing at all.  Turbulence
> requires some forces acting on the water.   It's not clear to me this is
> an instance of object evolution, although it could be called an instance
> of collective evolution, very loosely speaking, that could be identified
> and named *once* it recurred enough times to see the pattern.

It's not the collective that I'm trying to get at, it's the post-facto
naming of the object that I'm after.  The collective gets together in
some pattern, an external agent perceives this collective and _labels_
the pattern as, say, a "whirlpool".  That external agent should then be
able to act upon that whirlpool without explicit knowledge of how to act
on any given constituent of the whirlpool.
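
Here's a minimal sketch of what I mean, in plain Objective-C with every
name invented for the example (Label, WaterParcel, -act: -- none of it
is Swarm API).  The observer holds only the label and fires a generic
action; the label asks whatever constituents happen to be present right
now, without caring about their types or whether the membership has
churned since the label was attached:

   #import <Foundation/Foundation.h>

   // A post-facto label: observers act on the label, never on the
   // constituents, which may be swapped out at any time.
   @interface Label : NSObject
   @property (strong) NSMutableArray *constituents;
   - (void)act:(SEL)action;
   @end
   @implementation Label
   - (void)act:(SEL)action {
       for (id c in [self.constituents copy])
           if ([c respondsToSelector:action])      // untyped: ask, don't assume
               ((void (*)(id, SEL))[c methodForSelector:action])(c, action);
   }
   @end

   @interface WaterParcel : NSObject
   - (void)spin;
   @end
   @implementation WaterParcel
   - (void)spin { NSLog(@"parcel %@ spins", self); }
   @end

   int main(void) {
       @autoreleasepool {
           Label *whirlpool = [Label new];
           whirlpool.constituents = [NSMutableArray arrayWithObjects:
               [WaterParcel new], [WaterParcel new], nil];
           [whirlpool act:@selector(spin)];            // act on the label
           [whirlpool.constituents removeAllObjects];  // complete turnover
           [whirlpool.constituents addObject:[WaterParcel new]];
           [whirlpool act:@selector(spin)];            // same label, same action
       }
       return 0;
   }

The point is that the "whirlpool" keeps its identity across a complete
replacement of its parts.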

>> A more biological example might be
>> the regular replacement of cells in the body while preserving the larger
>> scale feature.
>>   
> This one still seems like an untyped set of objects in an appropriate
> vicinity, just that the objects are changing over time -- some being
> removed and some being added.

When a "doctor" pokes into your belly or shines a light into your
eyeball, she's not interacting with the cells that make up the tissue.
She's interacting with the tissue.  The doctor interacts with these
large scale objects even after the cells in the object have been replaced.

That's _not_ (in reality) because there is this real _thing_ called an
"eyeball".  It's always just the cells and tissues, even though the
doctor perceives it as an "eyeball".  This is the same as in the case of
the whirlpool.

>> I think a looser
>> binding would provide a more intuitive way of modeling these constituent
>> independent "objects" in the real world.
>>   
> There is still a real distinction between situations involving
> membership and situations involving non-membership.   The latter are
> subjective per a model and the others are, I think it is safe to say,
> real.   Take out the engine from a car, it won't go anywhere.   The
> implicit ones I see as phenomena to track and measure, but not ones that
> should have integrated data structures for that thing (that may

I'm not trying to make an ontological or scientific statement about
reality.  Sorry if it seems like I am.  I'm trying to make a statement
about what the agents (including the modeler) _know_ explicitly and
implicitly.

So, yes, mechanics know that if you remove the engine from the car, it
won't go.  But, my dog doesn't know that.  He just gets in the car and
expects it to go.  My dog doesn't need to know about the engine in order
to interact with the car.  Worse yet, if the dog has language, his token
for the car door (as in the "open the door!  open the door! open the
door!" look they get on their faces when you're about to go for a ride)
has little to do with the human's token for the door.  So, a dog may
refer to the door as X and I may refer to the door as Y; but, it's the
same door and the same car.  The dog refers to the car as A and I refer
to it as B.  And the dog may refer to "opening" something as "booga",
where I refer to it as "open".  Hence, the dog might say:

   [A->X booga];

but I'll say:

   [B->Y open];

Same car, same door, same action, different lexicons.
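
In case it helps, here's how I picture that in code, with the world and
both lexicons invented for the example: each speaker owns a table from
its private tokens to the shared referents, and an utterance only
becomes a concrete message after passing through the speaker's table:

   #import <Foundation/Foundation.h>

   @interface Door : NSObject
   - (void)open;
   @end
   @implementation Door
   - (void)open { NSLog(@"the door opens"); }
   @end

   @interface Car : NSObject
   @property (strong) Door *door;
   @end
   @implementation Car
   @end

   // Resolve (thing, part, verb) through a speaker's private lexicon.
   static void utter(NSDictionary *lex, NSString *thing,
                     NSString *part, NSString *verb) {
       id target = [lex[thing] valueForKey:lex[part]];  // A -> car, X -> door
       SEL sel   = NSSelectorFromString(lex[verb]);     // booga -> open
       if ([target respondsToSelector:sel])
           ((void (*)(id, SEL))[target methodForSelector:sel])(target, sel);
   }

   int main(void) {
       @autoreleasepool {
           Car *car = [Car new];
           car.door = [Door new];
           NSDictionary *dog   = @{@"A": car, @"X": @"door", @"booga": @"open"};
           NSDictionary *human = @{@"B": car, @"Y": @"door", @"open":  @"open"};
           utter(dog,   @"A", @"X", @"booga");  // the dog's [A->X booga]
           utter(human, @"B", @"Y", @"open");   // my [B->Y open]; same door opens
       }
       return 0;
   }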

>> More concretely, I want at least two layers:  1) the bound programming
>> layer and 2) a looser lexical layer.  The implementation for (1) would
>> be guided by practical requirements like the popularity of the available
>> languages and run-times.  But, layer (2) would be dynamically negotiated
>> by the agents in the system like Luc Steels' emergent lexicon.  In that
>> system, a group of robots are given the ability to "refer" (point to
>> objects in their environment) and play "language games".  A robot bumps
>> into another robot, points at an object in the environment, and makes a
>> random noise.  The other robot makes a noise in response.  If the noises
>> are the same, then that game is a success and that noise is correlated
>> with that object.  In this way, a stable lexicon emerges.
>>   
> Why not have the second layer be a clean instance of the first?

Well, it seems to me that the former is a programming interface to the
machine and the latter is a modeling interface to the program.  They
seem to have different purposes, different use cases, and achieve
different things.
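
For what it's worth, Steels' game is small enough to sketch.  This is
my own toy reduction (no word invention or scoring; a hearer simply
adopts the speaker's noise after a failed game), not his actual model:

   #import <Foundation/Foundation.h>

   // Toy naming game: ten agents, one shared object, random pairwise
   // games.  On a failure the hearer adopts the speaker's noise; a
   // shared lexicon emerges from nothing but local alignments.
   int main(void) {
       @autoreleasepool {
           NSMutableArray *noise = [NSMutableArray array];  // agent i's word
           for (int i = 0; i < 10; i++)
               [noise addObject:[NSString stringWithFormat:@"w%u",
                                 arc4random_uniform(1000)]];
           for (int game = 0; game < 2000; game++) {
               uint32_t s = arc4random_uniform(10);
               uint32_t h = arc4random_uniform(10);
               if (s == h) continue;                    // need two robots
               if (![noise[h] isEqualToString:noise[s]])
                   noise[h] = noise[s];                 // failure: hearer aligns
           }
           NSLog(@"emergent lexicon: %@", noise);       // one noise survives
       }
       return 0;
   }

After a couple of thousand games the ten agents almost always end up
sharing a single, arbitrarily chosen token for the object.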

>> What I mean by "object evolution" is that an object can completely
>> change its structure and behavior over some period of time _without_
>> having to change its referent ID, _type_ or classification.  Such a
>> loosely coupled layer would allow this _if_ we could programmatically
>> execute actions on that object through that layer.
>>   
> Consider a baby that can crawl vs. a human that can walk, windsurf, ski,
> run, whatever.   It's far, far easier to implement a model `baby' phase
> of life with a `crawl' method and `adult' phase of life with methods for
> the other mentioned forms of locomotion than it is to make a model that
> considers the physics of all the ways that the 600 muscles (N^600
> combinations) of a human could coordinate to create any kind of
> locomotion (and as a function of objects in the environment -- e.g.
> driving a car with a joystick run by the tongue).    Either one of these
> is consistent with your definition and they are completely different,
> both from a modeling point of view and as they relate to toolkit
> infrastructure.    The former is trivial to implement, and the latter
> requires many details about the morphology of a person, muscle
> strengths, nerve feedback accuracy,  physics of the environment, etc. etc.

Again, your response seems to target ontological reality rather than an
intra-model representation of knowledge.  I'm trying to focus on how
modeling is done.  Granted, some modeling efforts are reductionist and
attempt to show how phenomena at one scale emerge from generators at a
lower scale.  And vice versa.

But, what I'm talking about is the ability to have dynamic lexicons so
that the modeler can realistically represent the different internal and
operative representations of the agents in the model without forcing
those agents to have a machine-level understanding of the other agents
in the model.

By "machine level", I mean things like an agent having to know the
_actual_ memory address of the agents with which it interacts.  Or an
agent having to know the programming name or signature of methods to
call on another agent in order to interact with it.  This includes
"spaces", as well.  As it is, ABM tool kits require an agent to know
that the "world" is at 0x80499c0 and, in order to "perceive" its
neighbors, the agent has to know the method name (and number of
arguments and return value) for getting such information from the space.
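
Concretely, the kind of binding I'm complaining about looks like the
fragment below.  The method and variable names are made up, but the
shape should be familiar from any current toolkit:

   // The agent must hold the space's concrete address and know the
   // exact selector, argument list, and return type just to "perceive":
   Grid2d *world = sharedWorld;                 // e.g. the 0x80499c0 above
   NSArray *hood = [world neighborsAtX:self.x Y:self.y radius:1];
   // Rename the method or swap the grid for a different container and
   // every agent breaks, even though "look around" hasn't changed as a
   // concept.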

> What is the job of the man behind the curtain?   What is an example of a
> low level language forcing us to consider every implementation-level
> construct?    For example, in Swarm, I can already send a message from
> one agent type to a set of unknown types and hope for the best (or
> trivially arrange for these objects to swallow unknown messages).   What
> else, if anything, do you mean?

I think I've explained that above.  But, just in case, I'll say it
again.  I agree that you can send explicit, Objective-C messages (with
explicit method signatures) to explicitly identified regions of address
space.

What an agent _cannot_ do is send messages using its own names for those
methods to other agents using its own names for those agents.  If agent1
calls agent2 "bob", then agent1 should be able to send a message to
agent2 by something like:

  [bob pleasePassTheSalt];

rather than having to say:

  [agent2 pleasePassTheSalt];

The same is true of the method name.  Perhaps agent2 will only pass the
salt when you talk to him in French.
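
Here is a sketch of the indirection I'm after, with both tables and all
the names in them hypothetical: agent1 speaks entirely in its own
tokens, and a thin layer translates the receiver's name and the verb
before any concrete message is dispatched:

   #import <Foundation/Foundation.h>

   @interface FrenchAgent : NSObject
   - (void)passezLeSel;                       // only answers to French
   @end
   @implementation FrenchAgent
   - (void)passezLeSel { NSLog(@"voila, le sel"); }
   @end

   int main(void) {
       @autoreleasepool {
           FrenchAgent *agent2 = [FrenchAgent new];

           // agent1's private names for agents and for actions.
           NSDictionary *names = @{@"bob": agent2};
           NSDictionary *verbs = @{@"pleasePassTheSalt": @"passezLeSel"};

           id  who  = names[@"bob"];           // bob -> agent2
           SEL what = NSSelectorFromString(verbs[@"pleasePassTheSalt"]);
           if ([who respondsToSelector:what])  // agent2 happens to speak French
               ((void (*)(id, SEL))[who methodForSelector:what])(who, what);
       }
       return 0;
   }

agent1 never utters "agent2" or "passezLeSel"; those live only in the
lookup layer, which the agents could renegotiate at run time.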

>> The first step down this road, it seems to me, is to invert the control.
>>  In most programming exercises, methods are called on objects.  That
>> external locus of control is what presses us to fully _determine_ what
>> object we're manipulating beforehand.  E.g., the programmer should know
>> whether or not object X responds to method Y before she writes code
>> calling method Y on object X.  And if X.Y() is invalid, we want the
>> system to notify the programmer.
>>   
> I'm puzzled by this.  As you know, this is not the case with Swarm.   
> The norm in Swarm is to use messages that have no knowledge of their
> destination until they get there.

Actually, they have a great deal of knowledge of their destination.
They have to have a valid pointer to that destination or else the
program will crash with a seg fault.  There are looser constraints like
the ability to send a message to nil or the ability to send a message
that is declared and defined (even if that method isn't available to the
receiver object).

Contrast this with, say, human behavior.  I can be introduced to someone
at a party, forget their name, and actually _use_ a different name for
them without any serious confusion (though with plenty of embarrassment
when I find out that I was using the wrong name).  I can say:

   [tom pleasePassTheSalt];

and it will be understood by the ... ahem ... run-time as:

   [joe pleasePassTheSalt];


To see how this relates to object evolution, I can take a tee-ball-level
baseball-playing child out to the diamond and say:

   [tom batterUp];

And I can take that same _person_ 15 years later to the diamond and say
exactly the same thing but with different results.
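
In code the property I care about is just this: one identity, one
selector, behavior that evolves with internal state.  Player and its
age field are invented for the example:

   #import <Foundation/Foundation.h>

   @interface Player : NSObject
   @property int age;
   - (void)batterUp;
   @end
   @implementation Player
   - (void)batterUp {
       // same selector, same object identity; the behavior evolves
       if (self.age < 7) NSLog(@"swings off the tee");
       else              NSLog(@"hits live pitching");
   }
   @end

   int main(void) {
       @autoreleasepool {
           Player *tom = [Player new];
           tom.age = 5;   [tom batterUp];  // tee-ball
           tom.age = 20;  [tom batterUp];  // same person, same message,
       }                                   // different result
       return 0;
   }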

> Lately I've been weakening to the intuition that the CPU branch
> prediction problems caused by dynamic method dispatch are (after
> suitable optimizations) logically related to interesting things a model 
> or system could do.  And that, in principle, it ought to be possible to make
> a high performance message dispatch system by watching simulation
> dynamics, and further that in a heterogeneous model (say some kind of
> social network) that places where it can't be optimized are interesting
> locales for study.   Maybe I should elaborate on this to the NSA...  ;-)
> 
> But seriously, I think there is a real possibility to make Swarm's
> message dispatch as fast as or faster than precompiled member functions.  
> Whether anything `deep' can be learned from these statistics I don't
> know.  Along the lines of what Dynamo does.

Again, though, these are programming issues, not modeling issues.  Even
if there's a direct relationship between efficiency of computation and
what is being modeled (or how it's being modeled), the purposes and
required expertise for understanding the two types of problems are
different.  And even if one believes that a good modeler must be a good
programmer, the two tasks (modeling and programming) are different
tasks.  And I think it's useful to consider the modeling problems
separately from the programming problems when discussing requirements.

-- 
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Always acknowledge a fault. This will throw those in authority off their
guard and give you an opportunity to commit more. -- Mark Twain

