
Re: [Swarm Modelling] Re: The "Art" of Modeling


From: Jason Alexander
Subject: Re: [Swarm Modelling] Re: The "Art" of Modeling
Date: Sun, 16 Feb 2003 11:09:38 +0000

> Breaking the phenomena into the smallest pieces possible has tremendous advantages. Parsimony is obvious. If we can explain a lot with a little, we have a great model.

Writing from the point of view of a philosopher of science (don't shoot me), two questions which I'd like answers to, and which I don't see discussed in your message, are (1) what you mean by "explain" and (2) what criteria must be satisfied for a purported explanation to be considered a "good" or "adequate" explanation.

Although you write that a "great model" can "explain a lot with a little," this statement still needs further explication, since it is perfectly compatible with both instrumentalism and realism. Let me think like an old-school philosopher for a minute by supposing that explanations have to satisfy something like Hempel's deductive-nomological (DN) model of explanation --- that is, an explanation consists of a set of general laws and initial conditions from which one can deduce the thing to be explained. Let's set aside for the moment the view that agent-based models provide a theory of explanation different in kind from this one, since the burden of proof for that claim lies with the agent-based modeler. Agent-based models consist of general laws (the various bits of code that specify the dynamics of the model, which may be simple or complex, but which nonetheless constrain and determine, deterministically or indeterministically, the future state of the model) and initial conditions (the parameter space which we sweep through), so agent-based models satisfy the conditions of the DN-model of explanation. [Epstein has an article in, I think, Complexity, which talks about the connection between agent-based models and "old school" theories of explanation.]
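
To make that mapping concrete, here is a minimal sketch of the decomposition (in Python rather than Swarm, purely for illustration; the imitation rule and parameter names are hypothetical, not anyone's actual model). The update rule plays the role of the general laws, and the starting population plus parameter settings play the role of the initial conditions.

import random

def update(agents, imitation_rate):
    # "General law": each agent imitates a randomly chosen member of the
    # population with some probability. This bit of code constrains the
    # future state of the model (here, stochastically).
    return [random.choice(agents) if random.random() < imitation_rate else a
            for a in agents]

def run(initial_agents, imitation_rate, steps):
    # "Initial conditions": the starting population plus the parameter
    # values we sweep through. Laws + initial conditions let us deduce
    # (simulate) the state to be explained.
    state = list(initial_agents)
    for _ in range(steps):
        state = update(state, imitation_rate)
    return state

print(run([0, 1, 0, 1], imitation_rate=0.3, steps=10))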

Now, on the DN-model, the only difference between explanation and prediction is a temporal one: explanations address phenomena which have already occurred, and predictions address phenomena which have not yet occurred. In my experience, many of the alleged explanations offered by agent-based models, if they try to target real phenomena at all, concern themselves only with explanation and not prediction. That is, they try to show how particular phenomena can be reproduced (deduced) from a particularly simple model (a small set of laws or code). I'm thinking of the "Boids" model, among others.

If all that's required of a "good" model is that it be capable of reproducing phenomena (without any prediction), then we are well on the road to instrumentalism. All we care about is finding the simplest set of general laws -- which need not map onto any real causal processes in the world -- that serves to reproduce the actual phenomena. To give a crazy example of the kind philosophers are known for, suppose we want to explain a recurring social phenomenon in which everyone paints their face blue. We can get an amazingly accurate reconstruction of the phenomenon by using a model in which the only general law (bit of code) looks like

[actionGroup createActionForEach: agentList action: M(setFaceColorBlue)];

but this need not map onto or model any real causal process at all. Does this explain? Well, if all we require is that the phenomenon be reproduced, we'd have to answer "Yes." So we can now explain all social phenomena to arbitrary degrees of accuracy by following and extending the above approach! Thus, we'd explain individual behavior in market interactions by cooking up (by any means necessary) a model which reproduces it.
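
To see just how little work the single "law" is doing, here is a minimal sketch of the whole blue-face model (in Python rather than Swarm, with hypothetical class and attribute names): the one rule simply restates the observation it is supposed to explain.

class Agent:
    def __init__(self):
        self.face_color = None

    def set_face_color_blue(self):
        # The entire "general law" of the model: the explanandum, restated as code.
        self.face_color = "blue"

agent_list = [Agent() for _ in range(100)]
for agent in agent_list:              # in effect, the createActionForEach: schedule
    agent.set_face_color_blue()

# The model "reproduces" the phenomenon perfectly...
assert all(agent.face_color == "blue" for agent in agent_list)
# ...but nothing in it maps onto a causal process that actually produces the behavior.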

Let me address one objection. One might say that, even though we don't get an explanation of why people paint their face blue by writing

[actionGroup createActionForEach: agentList action: M(setFaceColorBlue)];

we *do* (in some sense) get an explanation of individual behavior in market interactions if we can reproduce it. This view seems mysterious to me - why should reproduction of a complex phenomenon by following method M produce an explanation when reproduction of a simple phenomenon by following method M does not? (To foreshadow, I think the reason why one might think this is that, for a complex phenomenon, the mere fact that you can reproduce it [in a model] means you've identified, in some sense, the underlying causal laws, mechanisms, or processes which really did serve to produce that phenomenon. This "complexity implies convergence to the truth" view requires an argument.)

Anyway, I take it that the above account of why people paint their face blue *doesn't* provide an explanation because it doesn't hook up the general laws (bits of code) with actual laws, mechanisms, or processes. Good models and simulations explain, therefore, insofar as they identify general laws and mechanisms (perhaps even to a first approximation) that really do exist. Given this, a good model should be able to predict (at least in sufficiently similar circumstances, for sufficiently short periods of time) future states of the system, at least to certain degrees of accuracy.

When you write

> A parsimonious model may only explain some portion of the phenomena of interest, but my experience is that in the process of cutting out everything not absolutely essential, what remains is essential in the sense that it is the essence of the problem thus important for many other related problems.

although I agree that "cutting out everything not absolutely essential" implies that "what remains is essential," I don't see why a successful parsimonious model ("successful" here meaning "reproduces the phenomenon in question" and "parsimonious" meaning "minimal or sufficiently small set of general laws") will identify "the essence of the problem" in the sense of identifying even one general law, mechanism, or process that maps onto the real world. If successful parsimonious models generally *do* this, I suspect it's an artifact of a semi-deliberate process of selection on the modeler's part in setting up the model: in constructing the model, you've already ruled out from consideration models whose general laws don't fit into some underlying theoretical framework.

> With the lessons on simplicity firmly in mind, I attended a talk by a weather scholar at UCLA. He described the hundreds of differential equations in his program and how dramatic the improvements over former attempts have been. This made me incredibly nervous. Hundreds of differential equations seemed to lead right into the problems of atheoretic uninterpretability that Achen warns about. In response, our weather expert said "our aim with this model is to save people's lives and get them out of the way of floods and disaster, not to 'understand' tornados."

The weather scholar seems to be advocating an instrumentalist view over a realist view. If the ultimate goal is saving lives, then one will use any "black box" model which offers the best prediction of the future, regardless of whether it helps us "understand" tornados (where "understanding" a tornado means that we have an accurate description of the general laws, mechanisms, and processes which serve to produce tornados).

But we then face the standard problem of instrumentalism: it seems that the only justification we can give for why a "black box" model should offer accurate predictions is that it employs, in some way, a description of the general laws, mechanisms, and processes which are really at work in the world. If so, then the best way in which to cook up the "black box" model is to just go out and try to identify the general laws, mechanisms, and processes that really exist, i.e., we get a call for realism.

> How much understanding do we need? How much predictive power do we need? I think that a good modeling process looks back and forth from one goal to the other because advances in one area facilitate advances in the other.

I'm inclined to agree with this, provided that by a "good modeling process" you mean one that seeks to provide predictions of future phenomena and not reproductions of previously observed phenomena. As the blue-faced people example shows, the simplest ways of reproducing past observed phenomena may very well employ bogus laws that just redescribe what happened in a different language. Successful prediction at least gives us some reason to think that the general laws we've identified (incorporated into the model) map onto the world, at least to some degree.

If there is no requirement that a "good model" hook up to the world by employing correct general laws, then I see no reason for thinking that "advances" in reproducing phenomena should lead to advances in prediction.

> A final thought is the importance of ambitions. ...
> If I was writing a model with a prisoner's dilemma at the core, I would parameterize it so that I could easily transform it into another game by just changing the payoff structure. I would also make all the agents have their own payoff matrices so that I could change to heterogeneous payoffs once I understood how homogeneity worked. Thus, in a later version, I might think that we are playing a battle of the sexes while you think we are in a prisoner's dilemma.

I agree with this, although it is important to keep ambitions in check; otherwise everything becomes parameterized and the model spirals out of control. (I.e., agents have to interact according to some dynamic. But why *that* dynamic? Presumably that could be parameterized as well, but in doing so you are well on the way to recreating a general-purpose agent-based modeling system within your particular agent-based model.)
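
As a rough illustration of the parameterization described in the quoted passage (a sketch in Python rather than Swarm; the move labels and payoff numbers are my own placeholders, not values from the original message), switching games is just a matter of handing the agents a different payoff matrix, and heterogeneity is a matter of giving each agent its own copy:

# Payoff matrices from the row player's point of view: payoffs[my_move][your_move].
PRISONERS_DILEMMA = {"C": {"C": 3, "D": 0}, "D": {"C": 5, "D": 1}}
BATTLE_OF_SEXES   = {"B": {"B": 2, "F": 0}, "F": {"B": 0, "F": 1}}

class Agent:
    def __init__(self, payoffs):
        # Each agent carries its own payoff matrix, so heterogeneous payoffs
        # only require constructing the agents differently.
        self.payoffs = payoffs

    def payoff(self, my_move, other_move):
        return self.payoffs[my_move][other_move]

# Homogeneous prisoner's dilemma population:
agents = [Agent(PRISONERS_DILEMMA) for _ in range(10)]

# Later version: swap in a different matrix, or mix them, so that you and I
# may not even think we are playing the same game.
mixed = [Agent(BATTLE_OF_SEXES) if i % 2 else Agent(PRISONERS_DILEMMA)
         for i in range(10)]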

Cheers,

Jason
--
Dr. J. McKenzie Alexander
Department of Philosophy, Logic and Scientific Method
London School of Economics and Political Science
Houghton Street, London WC2A 2AE


