Re: [Gneuralnetwork] Network structure change


From: Ray Dillinger
Subject: Re: [Gneuralnetwork] Network structure change
Date: Fri, 7 Oct 2016 12:11:02 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.2.0


On 10/06/2016 08:05 PM, David Mascharka wrote:

> I'm wondering if this is a fruitful direction to head for gneural-network.
> What I've got now doesn't have as many features as the gneural release but
> may be more reusable. What do you think is the best direction to head
> moving forward? I think the C++ version I've been working on allows for
> really nice extensibility/adding layers but if there are good reasons to
> work in C I'm on board with that.

I've recently checked in a new network structure that simplifies the
representation of neural networks a lot, and allows networks that are
recurrent and not necessarily layer-structured.  There are no longer
limits on the number of nodes or on the number of incoming connections
per node.

It's not fully working yet; I've written the feedforward routine
for it, and a translation routine that converts networks in the
old format into networks in the new format, but I haven't integrated
it with all the rest of the machinery (training, input, output,
etc.) yet.  There's lots of work to be done, but it's the way forward.
Like your 'parallel direction', it represents a loss of current
features but has more future potential.  By being much simpler it's
also much more flexible.

You should look at it if you're developing wrappers or interfaces.
It's really very simple; networks are simply a set of nodes and a
set of connections.  The connections have a 'source' node and a
'destination' node and are stored in the sequence in which they're
to be fired.  Internally there's no notion whatsoever of layers,
etc. - that's all organizational stuff that people can use when
setting up a network definition and understanding its structure,
but it's completely nonessential to the way the network is
represented internally.
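
To make that concrete, here is a rough sketch in C of what such a
representation and a pass over it might look like.  The names
(node_t, connection_t, feedforward) and the details (e.g. applying
tanh to every node) are mine for illustration only, not the actual
checked-in code:

-------

#include <stddef.h>
#include <math.h>

typedef struct {
    double input;   /* accumulated input for the current pass  */
    double output;  /* activation produced on the last pass    */
} node_t;

typedef struct {
    size_t source;  /* index of the source node      */
    size_t dest;    /* index of the destination node */
    double weight;  /* connection weight             */
} connection_t;

typedef struct {
    node_t       *nodes;        /* dynamically sized: no node limit */
    size_t        num_nodes;
    connection_t *connections;  /* stored in firing order           */
    size_t        num_connections;
} network_t;

/* One pass: fire each connection in the order it is stored,
   accumulating weighted source outputs into the destination
   nodes, then apply an activation to every node (tanh here,
   purely as an example). */
static void feedforward(network_t *net)
{
    for (size_t i = 0; i < net->num_nodes; i++)
        net->nodes[i].input = 0.0;

    for (size_t i = 0; i < net->num_connections; i++) {
        connection_t *c = &net->connections[i];
        net->nodes[c->dest].input +=
            c->weight * net->nodes[c->source].output;
    }

    for (size_t i = 0; i < net->num_nodes; i++)
        net->nodes[i].output = tanh(net->nodes[i].input);
}

-------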

Right now I'm writing a parser for a simpler configuration language,
whose statements correspond to calls that you could wrap in an
interface class.  I'd really like to talk further about what ought
to be provided for in this configuration language.

As an example of defining network topology, I'm looking at something
like this.

-------

# Nodes 1-5 are input, 6-8 are hidden, 9 and 10 are output

CreateInput(5,None,Identity) # no input fn, output = input
CreateHidden(3,Add,Tanh) # input fn is add, output is tanh
CreateOutput(2,Add,Identity) # input fn is add, output = input

# Connections between ranges create connections from all
# the nodes in the first range to all the nodes in the second
# range. 'Randomize' is a keyword which can replace an
# explicit weight matrix giving the weights.

MakeConnection({1 5},{6 8},Randomize) # connect input to hidden
MakeConnection({6 8},{9 10},Randomize) # connect hidden to output


------

This creates a classic feedforward network with 5 input
nodes, 3 hidden nodes, and 2 output nodes, with every input
node connected to every hidden node and every hidden node
connected to both output nodes.

CreateInput, CreateHidden, and CreateOutput all take the number
of nodes to create, the input function, and the output or
activation function.  Node ID numbers are assigned to the
created nodes in order of creation.

MakeConnection takes source, destination, weights.  Source
and destination can be an integer node ID, or {int1 int2}
for a range of node IDs.  Weights can be a real number
(when making a single connection), a [list of weights]
when making multiple connections, or the keyword 'Randomize'
if you want to assign random initializations to the connections.
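
For instance, the other two weight forms might be written like this
(node numbers and weight values here are invented purely to show the
syntax):

-------

MakeConnection(3,9,0.25)                            # one connection, explicit weight
MakeConnection({1 2},{6 7},[0.10 -0.30 0.05 0.20])  # four connections, listed weights

-------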

This format will also have writeback, meaning that when you
save a network, what you've saved is in the same syntax as the
configuration file and preserves all the same information -
except that weights (and connections, for topology-altering
methods of training) will be as-trained rather than as-
initialized.  With writeback there's no difference in code
path between reloading a savefile and loading a configuration
file.  The advantage of writeback is that you can open the
savefile, change the training strategy or parameters or
whatever, reload it and continue training without losing
any progress you've already made.
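
As an invented illustration of writeback, saving the example network
above after some training might produce a file in the same syntax,
only with the trained weights spelled out (the lists below are
abbreviated and the values made up):

-------

CreateInput(5,None,Identity)
CreateHidden(3,Add,Tanh)
CreateOutput(2,Add,Identity)
MakeConnection({1 5},{6 8},[0.42 -1.07 0.31 ...])   # 15 as-trained weights
MakeConnection({6 8},{9 10},[0.88 -0.15 ...])       # 6 as-trained weights

-------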

