Re: [Swarm-Modelling] foundation of ABMs


From: Darren Schreiber
Subject: Re: [Swarm-Modelling] foundation of ABMs
Date: Tue, 5 Apr 2005 20:15:06 -0400


On Apr 5, 2005, at 7:12 PM, Joshua O'Madadhain wrote:

A couple of brief responses...

On 5 Apr 2005, at 13:11, Darren Schreiber wrote:

1) There are many different ways to evaluate a model. (A paper that I read from the engineering literature on validation catalogues 23, but there are many more, I'm sure).

2) There are many different reasons that you want to evaluate a model.

3) Items 1 & 2 are, or at least, should be, highly inter-related. You should choose the methods (note that I use the plural, because you probably want multiple methods) for evaluation (1) based upon your reasons for evaluating the model (2).

This is similar to the evaluation of models in the context of machine learning: in order to compare models' performance, you have to choose an evaluation function (often called an "error function" in this context)--and the choice of function is, or should be, based on what you want the evaluation to tell you.

Yes.
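
To make the evaluation-function point concrete, here is a small sketch in Python with made-up numbers (not real data): two candidate forecasts of the same series swap rank depending on whether they are scored by mean squared error or mean absolute error.

# Two hypothetical forecasts of the same observed series.
# Model A is usually exact but makes one large miss;
# Model B is always off by the same moderate amount.
observed = [10, 12, 11, 13, 12, 14]
model_a = [10, 12, 11, 13, 12, 26]   # one miss of size 12
model_b = [13, 15, 14, 16, 15, 17]   # always off by 3

def mse(obs, pred):
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

for name, pred in (("A", model_a), ("B", model_b)):
    print(name, "MSE =", mse(observed, pred), "MAE =", mae(observed, pred))

# MSE punishes the single large miss heavily, so B looks better under MSE;
# MAE rewards being usually exact, so A looks better under MAE.

Which scoring you prefer depends on whether an occasional large miss is tolerable, which is exactly the sense in which the choice of evaluation function should follow from what you want the evaluation to tell you.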

"Convergence to some solution" does not make sense for many of the problems that I am interested in as a political scientist. It looks like progress is being made in Iraq right now, but I wouldn't contend that this real-world phenomenon will "converge" or that there is "some solution." The social world just isn't like that. And there are deep problems with an ontology that constructs the world as having point solutions, equilibria, etc. For instance, economics wanders into moral quagmires when it suggests that everything will reach equilibrium. Empirically, there are reasons to believe that this is not true. Normatively, lots of people may suffer while we wait for a social system to converge.

I saw an interesting talk on this by Brian Skyrms recently, on some work he's done with Robin Pemantle (a mathematician friend of mine). They gave an example of the stag hunt problem that can be demonstrated to converge mathematically. However, in simulation over extremely long time periods (millions and millions of iterations), the problem doesn't converge.

So what kind of conclusions would we draw from a mathematical convergence and a lack of computational convergence? For problems where people might suffer and die due to policy choices that are made based upon our models, this actually matters a lot.
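
For concreteness, here is a toy stag hunt simulation in Python. It is emphatically not the Skyrms-Pemantle model, just a minimal Roth-Erev-style reinforcement sketch with assumed payoffs and an assumed population size, meant only to show the kind of long-run computational experiment at issue.

import random

# Stag hunt payoffs, keyed by (my move, opponent's move).
# Both (stag, stag) and (hare, hare) are Nash equilibria.
PAYOFF = {("stag", "stag"): 4, ("stag", "hare"): 0,
          ("hare", "stag"): 3, ("hare", "hare"): 3}

N = 20              # population size (assumed)
STEPS = 1_000_000   # "millions of iterations" territory

# Roth-Erev style reinforcement: each agent keeps a propensity per action
# and plays an action with probability proportional to its propensity.
propensity = [{"stag": 1.0, "hare": 1.0} for _ in range(N)]

def choose(p):
    return "stag" if random.random() < p["stag"] / (p["stag"] + p["hare"]) else "hare"

random.seed(0)
for step in range(STEPS):
    i, j = random.sample(range(N), 2)    # pair two distinct agents at random
    a, b = choose(propensity[i]), choose(propensity[j])
    propensity[i][a] += PAYOFF[(a, b)]   # reinforce the action actually played
    propensity[j][b] += PAYOFF[(b, a)]
    if step % 100_000 == 0:
        frac = sum(p["stag"] / (p["stag"] + p["hare"]) for p in propensity) / N
        print(f"step {step:>9,}: mean P(stag) = {frac:.3f}")

# Whether, how fast, and to which convention this settles depends on the
# payoffs, the population size, and the random seed, which is exactly why
# the analytic and the computational answers need to be compared with care.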

If the model has been shown to converge mathematically, but a simulation of it doesn't converge if you iterate for long enough, then it seems quite likely to me that the problem is numerical instability, caused by roundoff error, rather than anything particularly mysterious or interesting.

This could be. But it is certainly not inconceivable that a problem that converges analytically would nonetheless take more than a human-scale amount of time to converge in the real world.

Furthermore, imagine a problem that fails to converge computationally because of the kind of roundoff error you mention in the second decimal place, when the programmer has chosen to keep only two decimal places. This would not be very interesting, I agree. But if the programmer is using a specially designed computer that can accurately handle calculations to the hundredth decimal place and we still aren't getting the expected convergence, then we have to wonder whether our analytic results are sufficiently robust to inform our decision making.
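
There is a classic numerical illustration of exactly this tension, Muller's recurrence: the exact sequence converges to 6, yet at any fixed machine precision the computed sequence eventually veers off toward 100, and raising the precision only postpones the departure. A short Python sketch using the standard decimal and fractions modules:

# Muller's recurrence: x_{n+1} = 111 - 1130/x_n + 3000/(x_n * x_{n-1}),
# with x_0 = 11/2 and x_1 = 61/11. The exact limit is 6, but roundoff
# injects a tiny component that grows like 100^n, so any fixed-precision
# run eventually heads to 100 instead.
from decimal import Decimal, getcontext
from fractions import Fraction

def iterate(x0, x1, steps, div):
    a, b = x0, x1
    for _ in range(steps):
        a, b = b, 111 - div(1130, b) + div(3000, b * a)
    return b

N = 30

# 64-bit floats: at (or very near) 100 after 30 steps.
print("float   :", iterate(5.5, 61 / 11, N, lambda p, q: p / q))

# 100 significant digits: still near 6 after 30 steps, though it too
# would drift to 100 if iterated long enough.
getcontext().prec = 100
print("decimal :", iterate(Decimal(11) / 2, Decimal(61) / 11, N,
                           lambda p, q: Decimal(p) / q))

# Exact rational arithmetic: stays on the analytic path toward 6.
print("exact   :", float(iterate(Fraction(11, 2), Fraction(61, 11), N,
                                 lambda p, q: Fraction(p) / q)))

So before doubting an analytic result, it is worth checking whether the apparent non-convergence survives a change of precision or exact arithmetic; if it survives arbitrarily high precision, that is when the analytic result deserves the scrutiny described above.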

"Rigor" means very different things to different people. I dare you to fly on a plane that has only been evaluated with analytic proof. Or to take a drug that passes only a face-validity test. Or to forecast your return on investment using only historic data.

Unless I'm missing something, forecasts are either based on (models that are informed by) historic data, or on models that are constructed solely from intuition.

Well, current thinking on intuition in cognitive neuroscience (see Malcolm Gladwell's book "Blink" for a general-public version of this) is that intuition is a kind of learning from experience. In the expansive sense, all human knowledge is informed by models of historic data (typically models operating on wetware). But, using my ontology, I would label intuition as "theory." As a former lawyer, I borrow concepts from intellectual property law to define a model as a tangible manifestation of an idea. It's the thing you could copyright, trademark, patent, etc. I can't do any of those things with my intuition.

If you just go about mindlessly running regressions on historic data to come up with an investment strategy, then you will be missing out on the real advantage of being a human rather than a computer. We do have intuition. We can formulate theories and generalize knowledge in ways that are both cognitively explicit and implicit.

Anyone who works in an empirical science has undoubtedly run across people who have mined the data and noticed a pattern. If this is the point where you hand your money over, then you are likely to be in trouble (for instance, if they ran twenty atheoretical regressions and asked you to invest based upon the one that came out statistically significant at the 0.05 level). If they instead use this result to develop a theory, specify a model, and then test that model on data that is out of the sample and get good results, that is the point where you might do well to invest. To quote any prospectus in compliance with the law, "past performance is not a guarantee of future results."
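
Here is a small simulation of that trap, using numpy/scipy on pure noise rather than real financial data (simple correlation tests stand in for the regressions, and the sample sizes and variable names are made up for illustration):

# Twenty unrelated noise series are tried as predictors of a noise "return"
# series. With 20 independent tests at the 0.05 level, the chance that at
# least one looks significant by luck alone is about 1 - 0.95**20, or ~64%.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_obs, n_predictors = 60, 20

returns = rng.normal(size=n_obs)                      # the series to "predict"
predictors = rng.normal(size=(n_predictors, n_obs))   # twenty unrelated series

half = n_obs // 2   # fit on the first half, hold back the second half
best_p, best_idx = 1.0, None
for k in range(n_predictors):
    r, p = pearsonr(predictors[k, :half], returns[:half])   # in-sample fit
    if p < best_p:
        best_p, best_idx = p, k
print(f"best in-sample predictor: #{best_idx}, p = {best_p:.3f}")

# The out-of-sample check on the held-back half is where a chance
# "discovery" normally evaporates.
r_out, p_out = pearsonr(predictors[best_idx, half:], returns[half:])
print(f"same predictor, out of sample: r = {r_out:+.2f}, p = {p_out:.3f}")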

        Darren


Joshua O'Madadhain

address@hidden Per Obscurius...www.ics.uci.edu/~jmadden
Joshua O'Madadhain: Information Scientist, Musician, and Philosopher-At-Tall
It's that moment of dawning comprehension that I live for--Bill Watterson
My opinions are too rational and insightful to be those of any organization.

_______________________________________________
Modelling mailing list
address@hidden
http://www.swarm.org/mailman/listinfo/modelling



