
Re: [Swarm-Modelling] lifecycle requirements


From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] lifecycle requirements
Date: Wed, 29 Nov 2006 23:49:54 -0700
User-agent: Thunderbird 1.5.0.8 (X11/20061107)

Scott Christley wrote:
> This is often called the state explosion problem, and it becomes a problem when a state change is more than changing the value of a variable but implies functional change as well. If you design a model where you explicitly handle all of those states, you need to explicitly write code for the agent's function in each state. What if the number of states is exponential?
I think that's overstating it a bit. A larger space means more things are possible, but the interpreter of that space can still be simple. A codon is simple and stable in spite of the fact that larger genomes can code for more complex things. Similarly, the instruction set of a CPU is small and finite and relatively straightforward to get working, compared to the virtually infinite number of programs that can be written with that set of opcodes.
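To make that concrete, here is a minimal sketch in Python (the three-opcode instruction set and the sample programs are invented for illustration): the interpreter stays small and fixed even though the space of length-n programs it can run grows as 3^n.

  # A tiny fixed interpreter: three opcodes, yet there are 3^n
  # distinct length-n programs it can run. The interpreter's
  # complexity does not grow with the size of that program space.
  def interpret(program, state=0):
      for op, arg in program:
          if op == "add":
              state += arg
          elif op == "mul":
              state *= arg
          elif op == "neg":
              state = -state
      return state

  # Two of the exponentially many possible programs:
  print(interpret([("add", 2), ("mul", 3)]))  # 6
  print(interpret([("neg", 0), ("add", 7)]))  # 7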
> This might be no big deal, but another complexity is that under different circumstances the splicing process produces different RNA. So the basic idea that a single gene (DNA) produces a single protein is not true; there are multiple proteins produced through alternative splices.
Ok: 1) the notion of alternatives and 2) information about how to regulate the alternatives
> In biology, a protein's function is defined by its structure, but that structure is not fixed: enzymes and other molecules often change that structure, so a protein is considered to have different conformations.
and 3) context dependency
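One way to carry all three in a model is to make the gene-to-protein map a function of regulatory context rather than a fixed lookup. A minimal sketch, where the names (SPLICE_VARIANTS, transcribe) and the toy context are invented:

  # Toy model of alternative splicing: one gene, several splice
  # variants, selected by regulatory context.
  SPLICE_VARIANTS = {
      "geneA": {"stress": "proteinA1", "normal": "proteinA2"},
  }

  def transcribe(gene, context):
      # The same gene yields different proteins in different contexts.
      variants = SPLICE_VARIANTS[gene]
      return variants.get(context["condition"], variants["normal"])

  print(transcribe("geneA", {"condition": "stress"}))  # proteinA1
  print(transcribe("geneA", {"condition": "heat"}))    # proteinA2 (default)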
> Now you are faced with a scenario in which the hyperspace is so large that you cannot code in all of the possible interactions ahead of time.
But that's mainly a matter of computer runtime, not human time (e.g. too many coding details).
> Well, (one of) the key problems is going to be exactly how two unknown three-dimensional protein structures interact. If you take the naive approach, you say: okay, we will model the atoms of the two structures, calculate all the forces, and make some determination of whether they bind, etc. But you cannot use that approach when attempting to get your higher-level patterns, because it is too computationally complex for a system with lots of interacting proteins.
Some have been known to take a swing at this:

 http://www.t10.lanl.gov/kys
 http://mdgrape.gsc.riken.jp
> What you need then is to do that naive approach once, then translate the result into a rule.
Probably more like a few hundred times, across different perspectives and assumptions, to know when that rule is really there and not just statistical noise, or some event correlated to another that is the real cause. :-)
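A sketch of that workflow, assuming an expensive binding computation (the simulate_binding stand-in below is invented) that is replicated before its result is promoted to a cheap rule:

  import random

  def simulate_binding(x, y, seed):
      # Stand-in for the expensive atomistic computation; returns
      # True if proteins x and y bind in this replicate run.
      random.seed(hash((x, y, seed)))
      return random.random() < 0.8

  def extract_rule(x, y, replicates=300, threshold=0.95):
      # Promote "x binds y" to a rule only if it holds across enough
      # replicates to stand out from statistical noise.
      hits = sum(simulate_binding(x, y, s) for s in range(replicates))
      return hits / replicates >= threshold

  # The cheap rule table the agent-level model actually consults:
  rules = {("X", "Y"): extract_rule("X", "Y")}
  print(rules)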

> However, that rule has two parts: the "protein X and Y" part is okay, but what about the "do something" part? What if you don't know all the possible "do somethings" ahead of time? In a really cool evolutionary model you would not; this would be new functionality that was acquired.
A common objection to agent modeling is along the lines of "You know, I can think of N different ways you could get that result -- you found one using a liberal tolerance for error on your inputs." An evolutionary model may help you find more, but what do they have to do with the real world? E.g. how much error is involved in the conformational estimates of X and Y and how does that impact the extent to which that "something" means anything?
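On the open-ended "do something" question, one representation is to make the action part of a rule a small program over fixed primitives, so that an evolutionary process can assemble actions nobody wrote ahead of time. The primitives below are invented for illustration, and a real model would select compositions by fitness rather than at random.

  import random

  # Invented action primitives; an "acquired" action is a composition.
  PRIMITIVES = {
      "phosphorylate": lambda s: s | {"phosphorylated"},
      "cleave":        lambda s: s - {"intact"},
      "bind_membrane": lambda s: s | {"membrane-bound"},
  }

  def random_action(length=2):
      # A real model would select compositions by fitness, not chance.
      return [random.choice(list(PRIMITIVES)) for _ in range(length)]

  def apply_action(action, state):
      for name in action:
          state = PRIMITIVES[name](state)
      return state

  action = random_action()
  print(action, apply_action(action, {"intact"}))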