
From: Ray Dillinger
Subject: [Gneuralnetwork] Fascinating neural network papers...
Date: Sun, 27 Nov 2016 10:45:34 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.4.0

Here are some recently published papers I think are interesting and
relevant.  If anybody on the list would rather not be getting these
notes about new papers relevant to artificial neural networks, let me
know and I'll stop.

Feedforward and feedback frequency-dependent interactions ....

This paper is about how different laminar levels and areas of the cortex
send each other signals and how frequency of signals helps to organize
the interaction.

These interactions in an artificial neural network of laminar structure
(based on biological neural networks in the cerebral cortex) reproduce
and to some extent explain phenomena seen in primate brains.
Understanding these processes is, as the author says, 'key to
understanding attentional processes, predictive coding, executive
control, and a gamut of other brain functions.'  Most of these are
functions we are also interested in for artificial neural networks.

The ability to make, modify, and investigate this kind of model using
free software, incidentally, is one of the major reasons why I'm here
working on this project - and the main reason why I'm insisting on
supporting recurrent neural networks of arbitrary scale and structure.
I'm not going down to the level of modeling chemical interactions and
cell metabolism, because I think (hope) that level of detail isn't
necessary to understand how this all works in terms of information.
But I'm interested in exploring the interactions, timings, and
topologies of networks having biologically-inspired structures.

You can't model biological networks with simple layered feedforward
models, because Mama Nature imposes a brutally complex recurrence on
everything more complicated than a flatworm, and doesn't give a flip
about keeping the model simple and organized into nice discrete layers.
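To make the contrast concrete, here's a minimal sketch in Python (not
Gneural Network code, and the sizes and weights are made up): instead of
a stack of layers, the whole network is one weight matrix over all
neurons, so any neuron may feed any other, cycles included.

```python
import numpy as np

# Minimal sketch of a recurrent network with arbitrary topology:
# one weight matrix W over all n neurons, no layer structure at all.
# W[i, j] is the connection from neuron j to neuron i; the sparse
# mask gives the messy, non-layered connectivity described above.
rng = np.random.default_rng(0)

n = 8
W = rng.normal(0, 0.5, size=(n, n))
W *= rng.random((n, n)) < 0.4       # sparse, arbitrary connections

def step(state, external_input):
    """One synchronous update: every neuron sees every neuron's last output."""
    return np.tanh(W @ state + external_input)

state = np.zeros(n)
x = np.zeros(n)
x[0] = 1.0                          # drive one neuron from outside
for _ in range(5):                  # state feeds back on every step
    state = step(state, x)
print(state.shape)                  # (8,)
```

A layered feedforward model is the special case where W is block
strictly-triangular; biology gives us no such guarantee.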

Synchronization in networks with multiple interaction layers

This is an article about what I call "flow networks" - though some other
researcher somewhere has probably given them a more widely-accepted
name.  In these networks each new set of inputs leaves the network in a
different state, so it will respond differently to the next
input - but the network's own state rarely or never influences what it's
given as its next input.  They're essentially recurrent networks with
the feedback loops cut, and whenever they reach "interesting" levels of
complexity they tend to become unstable.  This paper is about
synchronization mechanisms - how these networks can adapt so that
their next response (to whatever input they get) tends to bring
them back toward a 'stable' state.  IOW, it's about adapting to a
chaotic system rather than directly controlling it.  Or, seen another
way, about how and where to add the feedback loops that are
normally cut in a flow network.
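A toy sketch of that idea in Python (my own illustration, not the
paper's model - the leak term and sizes are assumptions): inputs arrive
regardless of the network's state, each one changes the internal state,
and a small homeostatic correction pulls the state back toward a rest
point after every input, playing the stabilizing role described above.

```python
import numpy as np

# Hypothetical "flow network": state accumulates across inputs, but
# the output never influences what input arrives next.
rng = np.random.default_rng(1)

n = 6
W = rng.normal(0, 0.6, size=(n, n))
rest = np.zeros(n)                  # the 'stable' state to drift back toward
leak = 0.2                          # strength of the pull back toward rest

state = np.zeros(n)
for _ in range(100):
    x = rng.normal(0, 1.0, size=n)  # inputs arrive regardless of our state
    state = np.tanh(W @ state + x)  # response depends on accumulated state
    state += leak * (rest - state)  # adaptation back toward stability
```

Without the leak term the state can wander chaotically; the correction
is the "added feedback loop" that keeps each response near the stable
regime.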

