
Re: [Swarm Modelling] Re: The "Art" of Modeling


From: Jason Alexander
Subject: Re: [Swarm Modelling] Re: The "Art" of Modeling
Date: Sun, 16 Feb 2003 18:44:13 +0000

Hi Chris -

> Jason; could you clarify the following bit for me?  You write...
>
>> The weather scholar seems to be advocating an instrumentalist view over
>> a realist view. If the ultimate goal is saving lives, then one will use
>> any "black box" model which offers the best prediction of the future,
>> regardless of whether it helps us "understand" tornados (where
>> "understanding" a tornado means that we have an accurate description of
>> the general laws, mechanisms, and processes which serve to produce
>> tornados).
>>
>> But we then face the standard problem of instrumentalism: it seems that
>> the only justification we can give for why a "black box" model should
>> offer accurate predictions is that it employs, in some way, a
>> description of the general laws, mechanisms, and processes which are
>> really at work in the world. If so, then the best way in which to cook
>> up the "black box" model is to just go out and try to identify the
>> general laws, mechanisms, and processes that really exist, i.e., we get
>> a call for realism.
>
> Why is that automatically "the best way"?  It would seem that this, too,
> requires an argument.

If a "black box" model can be used to make successful predictions, it must base those predictions on a causal relationship holding between variables - otherwise there's no way to account for the model's success at making predictions.

The examples you cite from Scientific American don't so much refute the claim as illustrate the importance of distinguishing between two tasks: (1) identifying general causal laws, often holding at some macro-level of description, and (2) identifying the general causal mechanisms, operating at some micro-level, which account for those macro-level laws. Redox chemical equations (which are basically laws of a sort) are great examples of this. The statement of the equation, which can be found by careful experimentation, doesn't itself provide any clue as to what the underlying mechanism is that accounts for the chemical equation (law).

Take the example of the GP project that created a circuit which outpredicts anything designed by humans, even though "we don't understand it." Since it is a digital circuit, it takes a set of input variables, massages the information contained in them somehow, and then produces an output. If the circuit is better than chance at predicting future states of affairs, it has found some causal relationships that hold between the variables and used them to construct a law. Now, when we say we "don't understand it," there are (at least) two things we might not understand:

(1) The actual law encoded by the circuit, relating input variables to output variables,

or

(2) The underlying causal mechanism which accounts for the actual law encoded by the circuit.

If we "don't understand" in the sense of (2), that's no big deal. When Kepler produced his laws enabling him to predict the motions of the planets, people didn't understand them in the sense of (2) because they didn't have gravitational theory yet. They were still general laws holding for a class of phenomena.

If we don't understand in the sense of (1), then we just haven't translated the general law found by the circuit into a form we can comprehend. We can presumably do this: let Mathematica grind on the logic of the circuit and translate it back into mathematical notation, giving us something we can then look at and comprehend. But what if the expression runs for hundreds of pages of ugly mathematical notation? You might say at this point that we still "don't understand it." It seems to me, though, that we then fail to understand it in the sense of (2), not (1).
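
Here is a rough sketch of that translation step. I've swapped in Python's sympy for Mathematica, and the "circuit" is a toy expression invented for the example, not the evolved one from the article; the point is only that an opaque encoding of a law can be ground back into a form we can read.

# Toy illustration of translating a circuit's logic into readable form.
# sympy stands in for Mathematica; the circuit below is made up.
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

a, b, c = symbols('a b c')

# Suppose the evolved circuit computes this opaque-looking expression:
circuit = (a & b & ~c) | (a & b & c) | (a & ~b & c) | (~a & b & c)

# Grind it into an equivalent, comprehensible law:
print(simplify_logic(circuit))
# e.g. (a & b) | (a & c) | (b & c), i.e. "majority of a, b, c"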

Either way, this method is compatible with realism: we are searching for general laws relating variables. The GP method just searches for those laws with a different technique from the ones we are most familiar with.
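
To illustrate in miniature what "searching for laws by a different technique" can look like, here is a hypothetical sketch: candidate laws are scored against observed data and the best-fitting one is kept. The data and the candidate pool are invented for the example; a real GP system would also generate and mutate candidates rather than drawing them from a fixed list.

# Finding a law by search: score candidate expressions against observations
# and keep the best fit. Purely illustrative data and candidates.
import math

observations = [(x, 3 * x ** 2 + 1) for x in range(1, 8)]   # hidden law: y = 3x^2 + 1

candidates = {
    "y = 2x + 5":     lambda x: 2 * x + 5,
    "y = x^3":        lambda x: x ** 3,
    "y = 3x^2 + 1":   lambda x: 3 * x ** 2 + 1,
    "y = 10*sqrt(x)": lambda x: 10 * math.sqrt(x),
}

def squared_error(f):
    return sum((f(x) - y) ** 2 for x, y in observations)

best = min(candidates, key=lambda name: squared_error(candidates[name]))
print("best-fitting law:", best)   # the search recovers y = 3x^2 + 1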

> Moreover, there are refutations available; for instance, the current
> issue of Scientific American reports on a GP project that has created at
> least one circuit that outperforms (outpredicts) anything designed by
> humans--and that the humans as yet don't understand it.  Another article
> talks about using data mining to target pharmaceutical research in
> directions that are most likely to yield high-return medications.  The
> results have been tremendous cost savings and the improved targeting of
> research--but the researchers don't necessarily understand why their new
> targets are "better" than their old ones.  Aren't either one of those
> applications better black boxes than we could presently build using the
> realist approach?
--
Dr. J. McKenzie Alexander
Department of Philosophy, Logic and Scientific Method
London School of Economics and Political Science
Houghton Street, London WC2A 2AE


