
From: gepr
Subject: Re: [Swarm-Modelling] Re: [Repast-interest] Distributed RePast
Date: Thu, 7 Aug 2003 14:45:12 -0700

Darren Schreiber writes:
 > Excuse the crosspost, but I think the discussion is useful to both  
 > communities.  Please post followups on the RePast list to avoid mess.

Hmmm.  I didn't see this come through on repast-interest...  Besides,
Swarm-Modelling is for general modeling discussion as opposed to 
toolkit-specific discussion, anyway.  So, I'm posting back to this
list.

 > Furthermore, I would contend that there are some very good reasons to  
 > go this route methodologically (in addition to the geek/cool factor).   
 > In a paper I've written on model evaluation (aka validation), I argue
 > for a multimodal approach to model evaluation.  One downside is that
 > some of these modes require a lot of computing time.  Running the model  
 > to fit it to empirical data sets, exploring the parameter space,  
 > robustness testing, GA and GP motivated iterations of the model, etc.

A shift in method is definitely warranted.  On one project I'm working
on, we're having a very tough time communicating the difference between
"analytic" modeling and "synthetic" modeling, precisely because
"validation" is an unambiguous term only in a limited context.

Analytic modeling consists primarily of finding a minimal set of events
or processes that mimics the observed behavior of the referent system.

Synthetic modeling consists primarily of tossing in a bunch of stuff,
whatever might work or seem to work, and selecting for configurations
that mimic the observed behavior of the referent system.

Every modeling effort involves both types of models, of course; but
the touchstones by which one judges the relevance and adequacy of
analytic models are very different from those used to judge synthetic
models.

So, a change (more accurately "an addition") in validation methods is
warranted.  Specifically, we need validation methods that work in
the context of selection as opposed to prescription.
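
To make "selection" concrete, here's a toy sketch (Python; the model,
the numbers, and the tolerance are all made up, and none of this is
Swarm or RePast code).  Instead of prescribing one "correct"
parameterization and checking it, you generate a pile of candidate
configurations, run each one, and keep whatever survives comparison
with the reference data:

  # Toy "validation by selection": generate many candidate
  # configurations, run each, and keep only those whose output stays
  # within tolerance of a reference trajectory.  run_model() is a
  # stand-in for a real agent-based model.
  import random

  def run_model(params, steps=50):
      """Pretend model: a noisy growth process."""
      x, series = 1.0, []
      for _ in range(steps):
          x += params["rate"] * x + random.gauss(0, params["noise"])
          series.append(x)
      return series

  def error(series, reference):
      """Mean absolute deviation from the reference trajectory."""
      return sum(abs(a - b) for a, b in zip(series, reference)) / len(reference)

  # Stand-in for an empirical data set.
  reference = run_model({"rate": 0.03, "noise": 0.1})

  candidates = [{"rate": random.uniform(0.0, 0.1),
                 "noise": random.uniform(0.0, 0.5)}
                for _ in range(200)]

  # Selection, not prescription: nothing says in advance which
  # configuration is "right"; we keep whatever passes the filter.
  survivors = [p for p in candidates
               if error(run_model(p), reference) < 2.0]
  print(len(survivors), "of", len(candidates), "configurations survive")

The point isn't the code; it's that the validation criterion acts as a
filter over a population of models rather than a proof about any one
of them.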

 > My grand vision is a set of tools where you could pop your model in and  
 > the system would evaluate your model in a variety of conditions and for  
 > the purposes that you are trying to establish.  Since this is just  
 > rerunning the model again and again it is perfect for poor-man's  
 > parallelism (like Drone or some other type of scripting).  But it would  
 > be even better if it could be distributed.  Perhaps, all of us using  
 > RePast would someday be able to donate the spare cycles we have to  
 > model evaluation for others.

Good start.  The remaining problem that stands out in my mind is the
"evaluation under specified conditions and for pre-stated purposes to
which your model should be applied."  With language like this, you
have to be careful that your requirements for the grid don't push you
right back into the methodological space you're trying to
superset: that of analytic models.
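
For what it's worth, the "poor-man's parallelism" part really is that
simple: the same model rerun over a grid of parameters and seeds, with
the runs farmed out to whatever workers are handy.  A minimal local
sketch (Python; run_once() is a placeholder model, and the Pool of
local processes stands in for Drone-style scripting or a real grid of
donated cycles):

  # Rerun one placeholder model over a small parameter sweep, one
  # independent run per (rate, noise, seed) triple, in parallel on
  # local worker processes.
  import random
  from itertools import product
  from multiprocessing import Pool

  def run_once(args):
      rate, noise, seed = args
      rng = random.Random(seed)
      x = 1.0
      for _ in range(100):
          x += rate * x + rng.gauss(0, noise)
      return (rate, noise, seed, x)   # one record per run, as in a sweep log

  if __name__ == "__main__":
      sweep = [(r, n, s)
               for r, n in product([0.01, 0.02, 0.04], [0.0, 0.1, 0.3])
               for s in range(10)]     # 3 x 3 x 10 independent runs
      with Pool() as pool:
          for rate, noise, seed, final in pool.map(run_once, sweep):
              print(rate, noise, seed, final)

Swap the local Pool for jobs shipped to other people's machines and
you have the donated-cycles version.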

 > In my mind, this is the right way to approach the epistemological  
 > problems of agent-based modeling.  Formal models have their "proofs."   
 > And, stats has its 95% confidence interval.  The problem with ABMs  
 > (which applies just as much to formal models and stats when you think  
 > deeply about it) is that we can so easily vary our assumption set and  
 > the parameter space is so large that we  need a new epistemological  
 > foundation for truth claims.

This is true of all models, really.  Even in formal models, it is easy
to introduce new axioms that change the character of the space of 
things that can be said about the model.

ABMs have the same problem we have in dealing with natural systems in
that they require many iterations of refinement and critique (which is
why open-source is borderline *critical* rather than just a Good
Thing) and a lot of process overhead... the behavioral details of the
people doing the modeling.  Just as in the natural sciences,
conclusions based on misinterpretations of the system or partial
understanding will usually be wrong conclusions.

The system you propose would go a long way to speeding up the
iterations (not just computationally, but it would also work to get
more eyes on your model).  If, every morning, some new modeling
whippersnapper submitted a model to be evaluated by my hardware (and
especially my software), I could become an even better mentor
(a.k.a. abusive elitist) than I can be when I have to seek out models
to critique.

To go one step further, if we could get the models to help evaluate
each other by embedding them all in a common (but mostly universal)
ontology and making them work together or against one another, 
then we'd see a manifold increase in iterations.

 > see that a 95% CI isn't as obvious as it should be.  There needs to
 > be a much broader regime for model evaluation.

I couldn't agree more.

-- 
glen e. p. ropella              =><=                           Hail Eris!
H: 503.630.4505                              http://www.ropella.net/~gepr
M: 971.219.3846                               http://www.tempusdictum.com


