Re: [Gneuralnetwork] Eventual Role for command-line gneuralnetwork.

From: Ray Dillinger
Subject: Re: [Gneuralnetwork] Eventual Role for command-line gneuralnetwork.
Date: Sun, 9 Oct 2016 21:14:21 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.2.0

On 10/09/2016 03:28 PM, David Mascharka wrote:

> are we after a neural network package or a general learning package (e.g. 
> what WEKA and other packages are trying to do)? 

Right now I'm focused mostly on neural networks - but the evolution of
neural networks by various types of genetic algorithm is going to be a
big part of this too, so there'll be a lot of GA code before we're done.
GA, obviously, can be applied to other things besides neural networks.
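To make the GA angle concrete, here's a minimal sketch of the kind of loop involved - this is NOT Gneural Network code, and the fitness function is a stand-in (a real system would train the network each genome encodes and score its error). The genome here is just a flattened adjacency matrix for a tiny candidate topology:

```python
import random

random.seed(42)

N = 5                      # nodes in a small candidate network
GENES = N * N              # genome: flattened adjacency matrix (1 = connection)

# Stand-in fitness: reward genomes whose adjacency matrix matches a
# hand-picked "good" topology.  A real system would instead build the
# network the genome encodes, train it, and score its error.
TARGET = [random.randint(0, 1) for _ in range(GENES)]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # flip each connection bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # single-point crossover of two parent genomes
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENES:
        break
    parents = pop[:10]                       # truncation selection + elitism
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(fitness(best), "/", GENES)
```

Swap the stand-in fitness for "train and evaluate the encoded network" and the same loop evolves topologies rather than bitstrings.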

> I'd like to take a look at the changes you've made to the configuration files 
> - could you point me to a link? 

I should be checking in the parser and writer in a couple more days -
and for now they only handle the topology.  More will follow as I get
it working, of course.
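For a sense of what "topology only" means, here's a purely hypothetical illustration - the node/connect syntax and attribute names below are made up for this sketch, not the actual format that will be checked in:

```python
# Hypothetical topology description -- NOT the real Gneural Network
# config format, which hadn't been checked in when this was written.
CONFIG = """
node in1  type=input
node in2  type=input
node h1   type=logistic
node out  type=logistic
connect in1 h1
connect in2 h1
connect h1  out
"""

def parse_topology(text):
    """Collect node attributes and the connection list from the text."""
    nodes, edges = {}, []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "node":
            name = parts[1]
            nodes[name] = dict(p.split("=") for p in parts[2:])
        elif parts[0] == "connect":
            edges.append((parts[1], parts[2]))
    return nodes, edges

nodes, edges = parse_topology(CONFIG)
print(len(nodes), "nodes,", len(edges), "connections")
```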

> I'm a bit confused by what you mean with network topologies having high 
> Kolmogorov complexity. Could you give a brief example of a type of network 
> you mean? I'm having trouble thinking of a network that can't be described a 
> lot more simply than by listing its parameters and connections. For example, 
> the VGG architecture is significantly easier to express than by listing 
> out its 139 million parameters. I could be misunderstanding your point here. 

That's a hyperNEAT system - very cool, and yes, the description as
multidimensional manifold equations is very much shorter than the
description as nodes and connections, which is awesome.  We'll need to
leverage that in our config files when we get around to supporting
hyperNEAT.
Ultimately when you have a "simple" problem (for some definition of
simple) you get a "simple" network to solve it, regardless of the number
of nodes/connections.  The simple(r) way to express the structure of a
hyperNEAT evolved system as manifold equations mostly reflects the
complexity of the problem rather than the size of the network itself.
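The hyperNEAT idea in miniature: a short function over node coordinates stands in for an evolved CPPN and generates every weight of an arbitrarily large substrate. The pattern function below is invented for illustration - in a real hyperNEAT system the CPPN is itself evolved, not hand-written:

```python
import math

# Made-up stand-in for an evolved CPPN: a short pattern over the
# coordinates of a source node and a target node yields their weight.
def pattern(x1, y1, x2, y2):
    return math.sin(3 * (x1 - x2)) * math.exp(-(y1 - y2) ** 2)

def substrate_weights(n):
    """All weights of a full n*n-node grid-to-grid connection layer."""
    coords = [(i / (n - 1), j / (n - 1)) for i in range(n) for j in range(n)]
    return {(a, b): pattern(*coords[a], *coords[b])
            for a in range(len(coords)) for b in range(len(coords))}

w = substrate_weights(8)     # 64 nodes -> 4096 weights
print(len(w), "weights from a two-line description")
```

Make the grid 100x100 instead of 8x8 and the weight count explodes while the description stays exactly as long - that's the compression the manifold-equation form buys.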

But some problems truly are horrifically complex, with genuinely high
Kolmogorov complexity.  It is sheer hubris to imagine tackling some of
them (how complicated is it to be a hamster?).  But when we do, the
solutions will have complexity that takes a terabyte to express - and
that in their simplest form, whatever that form may be.

But we probably won't know that simplest form.  More often than facing
a truly intractable problem with genuinely high Kolmogorov complexity,
we'll be facing a problem whose solution has at least one "simpler"
description (low Kolmogorov complexity) that we simply aren't aware of.
So we'll wind up describing it the way we know how - and that turns out
to be nodes and connections and weights.
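A toy version of that point - the same "network" described two ways. The weights below follow a short rule I made up for the example, so the shortest description is a few dozen bytes; but if you don't know the rule, all you can write down is the full list:

```python
# Short description: the generating rule itself (invented for this example).
rule = "w[i][j] = ((i * 31 + j * 17) % 7) / 7.0"

# Long description: the 300x300 weight matrix the rule generates,
# which is what you'd store if you didn't know the rule existed.
weights = [[((i * 31 + j * 17) % 7) / 7.0 for j in range(300)]
           for i in range(300)]

explicit_size = sum(len(repr(v)) for row in weights for v in row)
print(len(rule), "bytes as a rule vs ~", explicit_size, "bytes listed out")
```

Both descriptions pick out the identical matrix; the gap between them is exactly the gap between the description we know how to write and the simplest one that exists.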

In the worst case, we'll have a network with millions of nodes evolved
by a GA working at the level of individual nodes.  And it will be a
mess.  It may approximate some elegant structure that we have no clue
how to see or express.


