
[Swarm-Modelling] Re: [Repast-interest] Distributed RePast


From: Darren Schreiber
Subject: [Swarm-Modelling] Re: [Repast-interest] Distributed RePast
Date: Thu, 7 Aug 2003 10:56:48 -0700

Excuse the cross-post, but I think the discussion is useful to both communities. Please post follow-ups on the RePast list to avoid a mess.


I have been thinking along the same lines for a little while. UCLA's brain mapping group is moving towards a Grid setup, and I think the technologies are getting to the point where this shouldn't be too hard to accomplish. They already have a cool toolkit called Pipeline that lets you take data from one program into another easily and in an automated manner. So I could imagine running simulations, then dumping the output into a stats package, and then dumping out the graphs, all automatically.
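As a minimal sketch of that simulate-then-analyze-then-plot chain, here is what the glue could look like in plain Java with ProcessBuilder. The jar name, the -batch flag, and the two R scripts are hypothetical placeholders for whatever model and stats tools one actually uses.

    import java.io.File;

    // Chain three external steps: run the model, analyze the output, draw the graphs.
    public class RunAndAnalyze {
        public static void main(String[] args) throws Exception {
            run("java", "-jar", "model.jar", "-batch", "params.txt"); // simulation run
            run("Rscript", "analyze.R", "results.csv");               // stats step
            run("Rscript", "plot.R", "summary.csv", "figures/");      // graphing step
        }

        private static void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd)
                    .inheritIO()                  // let each tool print to this console
                    .directory(new File("."))
                    .start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("step failed: " + String.join(" ", cmd));
            }
        }
    }

Something like the Pipeline toolkit would presumably replace this hand-written glue with a declarative description of the same chain.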

Furthermore, I would contend that there are some very good reasons to go this route methodologically (in addition to the geek/cool factor). In a paper I've written on model evaluation (aka validation), I argue for a multimodal approach to model evaluation. One downside is that some of these modes require a lot of computing time: fitting the model to empirical data sets, exploring the parameter space, robustness testing, GA- and GP-motivated iterations of the model, and so on.

My grand vision is a set of tools where you could pop your model in and the system would evaluate it under a variety of conditions and for the purposes you are trying to establish. Since this is just rerunning the model again and again, it is perfect for poor man's parallelism (like Drone or some other kind of scripting). But it would be even better if it could be distributed. Perhaps all of us using RePast could someday donate our spare cycles to model evaluation for others.
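To make the "rerunning the model again and again" part concrete, here is a minimal sketch of a local parameter sweep with a plain Java thread pool. SimulationRunner, its runOnce() method, and the swept parameter are hypothetical stand-ins for however a RePast batch run is actually invoked; the same double loop could just as easily emit one job description per setting for Drone, Condor, or a grid scheduler instead of submitting to a local pool.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParameterSweep {
        public static void main(String[] args) throws InterruptedException {
            // One worker thread per local CPU: the poor man's parallelism case.
            ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

            double[] learningRates = {0.01, 0.05, 0.1};   // example parameter being explored
            int replications = 30;                        // repeated runs per setting

            for (double rate : learningRates) {
                for (int rep = 0; rep < replications; rep++) {
                    final double r = rate;
                    final int seed = rep;
                    pool.submit(() ->                     // each task is one independent model run
                            new SimulationRunner(r, seed)
                                    .runOnce("results/run_" + r + "_" + seed + ".csv"));
                }
            }
            pool.shutdown();
            pool.awaitTermination(7, TimeUnit.DAYS);      // block until every run has finished
        }
    }

    // Stand-in for whatever actually launches one RePast batch run.
    class SimulationRunner {
        private final double rate;
        private final int seed;
        SimulationRunner(double rate, int seed) { this.rate = rate; this.seed = seed; }
        void runOnce(String outFile) { /* model-specific: run one replication, write outFile */ }
    }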

In my mind, this is the right way to approach the epistemological problems of agent-based modeling. Formal models have their "proofs." And stats has its 95% confidence interval. The problem with ABMs (which applies just as much to formal models and stats when you think deeply about it) is that we can vary our assumption set so easily, and the parameter space is so large, that we need a new epistemological foundation for truth claims.

In political science, Chris Achen has a nice paper where he argues for a "Rule of Three": I should not buy your statistical model if you have more than three explanatory variables, because the parameter space is too big to really understand the model. This is exacerbated when we think of the proliferation of statistical models one can apply to a data set: Ordinary Least Squares, Logit, Probit, Maximum Likelihood Estimation, etc. Each of these is called for under different assumptions about the data and the problem, but when you can choose not only the explanatory variables but also the model and everything else, it is easy to see that a 95% CI isn't as meaningful as it should be. There needs to be a much broader regime for model evaluation.

My contention is that Grid computing, or something like it, is justified for ABMs because we are increasingly in a world where we risk generating non-robust models that would not survive validity testing, while remaining blind to these problems because the epistemological standards developed in the early 20th century have been satisfied.

        Darren



On Thursday, August 7, 2003, at 09:51 AM, Stephen C. Upton wrote:

Max,

I'd be interested in collaborating. For a project I've been working on for the past several years, we do something similar using Condor (http://www.cs.wisc.edu/condor/) as our distributed computing mechanism, and it works with a couple of our simulations, two in Java and one in Pascal. Another idea might be to use the Globus stuff, all in Java, and set it up to do grid computing. And a final idea is possibly using something like JADE (http://jade.cselt.it/), which can activate agents on different machines. Obviously, all of these require some learning, and there are likely other possibilities, including the one you mention below. The advantage of the above software is that most of it takes care of registering machines, handling security, and all of that other cr*p that's not very interesting! ;-)
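For the JADE option, a minimal sketch of a worker agent, assuming the rest of the plumbing (a running JADE platform, shipping results home) lives elsewhere. RunWorker, runModel(), and the parameter-file argument are hypothetical; Agent, OneShotBehaviour, getArguments(), addBehaviour(), and doDelete() are JADE's own API.

    import jade.core.Agent;
    import jade.core.behaviours.OneShotBehaviour;

    public class RunWorker extends Agent {
        protected void setup() {
            Object[] args = getArguments();   // start-up arguments passed by the JADE container
            final String paramFile = (args != null && args.length > 0)
                    ? args[0].toString() : "default.params";
            addBehaviour(new OneShotBehaviour() {
                public void action() {
                    runModel(paramFile);      // stand-in for launching one RePast batch run
                    myAgent.doDelete();       // terminate the agent once its single run is done
                }
            });
        }

        private void runModel(String paramFile) {
            // model-specific: unpack parameters, run the simulation, report results
        }
    }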

steve

Max J Cantor wrote:

Has anyone considered the creation of an address@hidden-like screen saver for RePast? Nothing too complex, just something that does runs in the background and uploads the results back to a central server. I was just contemplating the idea because on college campuses like mine there are tons of computers idling in labs.

Off the top of my head, I see a server daemon that sends out the model jar, plus batch files each representing a single run of the main batch file, to the clients, which run the model and then upload the results.



Given Java RMI and some other technologies, this should not be incredibly hard to implement, so is there any interest in usage or collaboration?
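As a minimal sketch of what that server/client contract might look like over plain Java RMI: the names RunServer, WorkUnit, nextWorkUnit(), and uploadResults() are hypothetical, and a real version would need at least a registry lookup, some bookkeeping for clients that disappear mid-run, and a way to avoid re-sending the jar on every request.

    import java.io.Serializable;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Contract the server daemon exposes to the screen-saver clients.
    public interface RunServer extends Remote {
        WorkUnit nextWorkUnit() throws RemoteException;   // hand out one run of the main batch file
        void uploadResults(long workId, byte[] results) throws RemoteException;
    }

    // One unit of work: the model jar plus the parameters for a single run.
    class WorkUnit implements Serializable {
        long id;                  // lets results be matched to the run they came from
        byte[] modelJar;          // the model jar shipped to the client
        String batchParameters;   // parameter settings for this single run
    }

A screen-saver-style client would poll nextWorkUnit() when the machine goes idle, run the jar in a separate process, and call uploadResults() when the run finishes.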

Max







