
Re: [Swarm-Modelling] comparing models


From: Steve Railsback
Subject: Re: [Swarm-Modelling] comparing models
Date: Mon, 18 Aug 2003 08:54:03 -0700

Scott Christley wrote:
> 
> Anybody familiar with Robert Axelrod's cultural dissemination model?
> And the follow-up paper by Axtell, Axelrod, Epstein, and Cohen which
> "docks" Sugarscape with Axelrod's model?  Even if you don't, that's
> fine, my questions are general enough.

By the way, the book these are in is an important read for anyone doing
ABMs.
 
> ...

> I've implemented Axelrod's model in Swarm, now I am going to implement
> the model again but with a fundamentally different algorithm (cultural
> dissemination rule) underneath.  Then I want to compare them to see if
> they are equivalent.

I am currently revising a book chapter on analyzing ecological ABMs, so
I should have something to say, but this is certainly an under-developed
field. In brief, what we're recommending is:

a. The most important question in comparing two versions of a model is
whether the two versions lead to the same conclusions about the system
you're modeling. 

b. Often, the best way to make this comparison is by identifying some
patterns that "capture the essence" of the system, then seeing which
versions of the model cause those patterns to emerge. In other words,
'weak equivalence' is the kind that matters most. (A minimal sketch of
this pattern-based check follows this list.)

c. There are a number of potential pitfalls in statistical comparison,
some of which you've identified. The sample size is arbitrary: how many
times you run the model determines how "significant" differences in the
distribution of results are (the second sketch after this list
demonstrates this). The comparison can depend on parameter values that
may not be well defined. And the distribution of results you get from
replicate model runs is entirely an artifact of how you use random
numbers in your model; so if you compare two algorithms that differ in
how strongly their results depend on random numbers, that difference
alone can affect the "significance" of differences in results.

Some of these issues are discussed in a little paper whose title
(Getting "results"...) arose from a question Paul Johnson posted here
several years ago. You can download it here:
http://math.humboldt.edu/~simsys/Products.html
 
> Now statistics is not my strong suit, so I hope that somebody can give
> me some pointers or suggest some reading material.
> * In the docking paper, they mention two statistical tests: two-sided
> Mann-Whitney U statistic and the Kolmogorov-Smirnov (K-S) test.  Are
> there any others?  Any good books or papers that talk about these types
> of tests, pros-cons, underlying assumptions, etc?

What you need is just a basic statistics textbook. I (being equally
ignorant of statistics) would just go to the library and look through
the statistics books (at the sections on comparing distributions) until
you find one you can understand and that answers your questions. But I
would also run my analyses past somebody who knows statistics before
attempting to publish them.
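
For what it's worth, both tests mentioned in the docking paper are
available in standard statistics libraries. Here is a minimal sketch
using Python's SciPy (my choice of tool, not something from the paper),
with the arrays standing in for one summary statistic per replicate run:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10.0, 2.0, size=100)   # stand-in: version A run outputs
b = rng.normal(10.5, 2.0, size=100)   # stand-in: version B run outputs

# Two-sided Mann-Whitney U: do the two samples come from
# distributions with the same location?
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

# Kolmogorov-Smirnov: do the two empirical distributions differ
# anywhere (in location, spread, or shape)?
ks_stat, ks_p = stats.ks_2samp(a, b)

print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.3f}")
print(f"K-S:            D = {ks_stat:.3f}, p = {ks_p:.3f}")

Note the two tests answer different questions: the U test is sensitive
mainly to a shift in location, while the K-S test picks up any
difference between the two distributions.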

Our experience has been that reviewers tend to be very picky about
statistical analysis of results from ABMs---apparently the whole idea of
"data" produced by a model makes them nervous, so they are even more
critical than usual.

And one solution we've had to use might actually be perfectly adequate
for you: just run each version of the model a bunch of times, draw
histograms of the output, and compare them visually. This actually tells
you more, with less gobbledygook and fewer things for reviewers to snipe
at, than running statistics. Talk about how similar the shapes are,
which parts of the distributions differ, etc. It may also be useful to
throw in a K-S test or the like, perhaps to reinforce a point that is
already clear from the visual comparison (e.g., that the two
distributions are nearly identical, or really different).
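
A minimal sketch of that visual comparison, again assuming Python (here
with matplotlib; the data are stand-ins for replicate-run outputs).
Using the same bin edges for both versions is what makes the overlaid
shapes directly comparable:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, size=200)   # stand-in: version A run outputs
b = rng.normal(10.5, 2.5, size=200)   # stand-in: version B run outputs

# Common bin edges so the two histograms line up bin for bin.
bins = np.histogram_bin_edges(np.concatenate([a, b]), bins=20)
plt.hist(a, bins=bins, alpha=0.5, label="version A")
plt.hist(b, bins=bins, alpha=0.5, label="version B")
plt.xlabel("model output statistic")
plt.ylabel("number of runs")
plt.legend()
plt.show()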

Finally, according to my co-author Bret Harvey, who Knows All concerning
statistics, if you just want to compare the *means* of some model output
between two versions of a model, you use one-way ANOVAs followed by
pairwise comparisons using Bonferroni t tests. An example is in the
paper "Analysis of habitat selection rules..." at the same web site.
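
A sketch of that recipe, once more in Python/SciPy (my choice; the
three hypothetical versions and their outputs are invented, and I use
three rather than two so the pairwise step is meaningful):

from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
versions = {
    "A": rng.normal(10.0, 2.0, size=50),   # stand-in run outputs
    "B": rng.normal(10.5, 2.0, size=50),
    "C": rng.normal(11.0, 2.0, size=50),
}

# One-way ANOVA: do any of the versions differ in mean output?
f_stat, p = stats.f_oneway(*versions.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

# Pairwise t tests at a Bonferroni-corrected significance level.
pairs = list(combinations(versions, 2))
alpha = 0.05 / len(pairs)
for x, y in pairs:
    _, p = stats.ttest_ind(versions[x], versions[y])
    verdict = "different" if p < alpha else "not different"
    print(f"{x} vs {y}: p = {p:.4f} -> {verdict} at corrected alpha")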
 
Steve R.

-- 
Lang Railsback & Assoc.
250 California Ave.
Arcata CA  USA 95521
707-822-0453; fax 822-1868

