

[Gneuralnetwork] I found a good idea today...

From: Ray Dillinger
Subject: [Gneuralnetwork] I found a good idea today...
Date: Fri, 18 Nov 2016 20:23:36 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.4.0

I read neurology papers, among other things, looking for good ideas to
use in artificial neural networks.

Today my ticker coughed up an article about the interplay of feedforward
and feedback systems in the visual-processing part of the brain of
mammals (specifically macaque monkeys).

And I thought about it for a minute or three and then realized that the
state of the art in artificial neural networks builds convolutional
neural networks in a way that's not very efficient compared to the way
the brain does them.

The brain operates a spiking recurrent network, while most artificial
neural networks are of the continuous-signal variety.  In small networks
continuous-signal is better, because the real-valued AMOUNT of signal
carries additional information downstream.  Biological brains implement
spiking networks, where the amount of information carried downstream is
one bit: fired, or didn't.  Brains save chemical energy by firing only a
small percentage of their neurons each cycle.  This lets an enormous
amount of information lie latent in connections that aren't being used
at a given instant, ready for use the instant the network decides that
neuronal path is relevant.

With a moderately clever implementation, the same applies to spiking
ANNs. We can save the CPU time required to process 99% of the
connections in the network, making enormous networks tractable.
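To make the savings concrete, here is a minimal sketch of an
event-driven update step.  The function name, the leaky-integrate
dynamics, and the dense weight matrix are my own illustrative choices,
not anything from the article; the point is only that work is done per
spike, not per connection:

```python
import numpy as np

def spiking_step(weights, spiked, potentials, threshold=1.0, decay=0.9):
    """One event-driven update: only the connections leaving neurons
    that fired this cycle are traversed, so the cost scales with the
    number of spikes rather than the total number of connections."""
    potentials *= decay                  # passive leak toward rest
    for i in np.flatnonzero(spiked):     # iterate only over firing neurons
        potentials += weights[i]         # propagate along i's outgoing weights
    fired = potentials >= threshold      # neurons crossing threshold spike
    potentials[fired] = 0.0              # reset membrane after firing
    return fired, potentials
```

If only 1% of neurons spike per cycle, the inner loop touches roughly
1% of the weight rows, which is where the claimed CPU savings come
from; a sparse weight representation would push this further.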

Today, I realized that convolutional neural networks (state of the art
for image recognition, audio processing, natural-language recognition,
etc - general sensory processing) have up to now been constructed in a
strictly feedforward fashion.  That means we don't actually know at the
outset which sets of convolution kernels will be relevant, so we run
all of them, at great CPU expense.  The more things your network can
recognize, the more high-level convolution kernels it has to have.
There's a significant CPU cost in the basic 'shape-and-pattern' kernels
at the very first couple of levels, but that becomes a tiny fraction of
the total used at higher levels: recognizing each individual category
needs at least one convolution pass, and probably a dozen or so in at
least one lower level.  So once you have paid the sunk cost of the
lowest-level processing, cost scales linearly with the number of
categories recognized.
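A back-of-the-envelope model of that scaling, with illustrative numbers
(the fixed base-kernel count is my assumption; the "dozen or so per
category" figure is from the paragraph above):

```python
def kernel_evaluations(categories, base_kernels=64, kernels_per_category=12):
    """Rough count of convolution-kernel passes per input in a purely
    feedforward network: a fixed set of low-level 'shape-and-pattern'
    kernels plus roughly a dozen higher-level kernels per category.
    The specific numbers are illustrative, not measured."""
    return base_kernels + kernels_per_category * categories

# The fixed low-level cost shrinks to a tiny fraction as categories grow:
# kernel_evaluations(100)    -> 1264     (base is ~5% of the total)
# kernel_evaluations(10_000) -> 120_064  (base is ~0.05% of the total)
```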

But if we make a convolutional network which is also recurrent, then
feedback signals can tell the lower levels of the network *which*
convolution kernels are relevant.  If the same network is also spiking,
we get to skip processing the rest entirely.  The result is that a
network that recognizes millions of different categories of image can
cost approximately the same CPU as one that recognizes a few hundred.
The good news is limited, because the memory commitment is still going
to scale with the network size regardless of how much CPU we save.  The
knee of the curve, where the significant savings start, is going to be
at least several hundred categories; at about double that, we'll hit a
point where adding categories to the system's ability to recognize
things is essentially free in terms of CPU time.
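Here is a toy sketch of the feedback-gated idea.  The gating vector
stands in for top-down feedback from the recurrent part of the network;
the function name and the naive 'valid' correlation loop are mine, kept
simple on purpose:

```python
import numpy as np

def gated_convolution(image, kernels, feedback_gates):
    """Apply only the convolution kernels that top-down feedback has
    flagged as relevant; every other kernel is skipped entirely, so
    per-input CPU cost scales with the number of ACTIVE kernels rather
    than the total.  Sketch only: 'valid' 2-D correlation, no strides
    or padding."""
    h, w = image.shape
    outputs = {}
    for k, kernel in enumerate(kernels):
        if not feedback_gates[k]:        # feedback says irrelevant: skip
            continue
        kh, kw = kernel.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
        outputs[k] = out
    return outputs
```

With a million kernels and feedback activating only the few hundred
relevant to the current input, the loop body runs a few hundred times
instead of a million - which is the whole point of combining the three
network types.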

I've been convinced for a long time that spiking recurrent neural
networks are an absolute necessity for general AI, because of the
potential CPU savings.  Today's realization is that this savings is even
applicable to input processing via convolutional subsystems. If we carry
that to the very lowest level of input processing by applying it to
convolutional networks, the CPU savings on input processing can be
absolutely enormous.

I hadn't previously thought of processing input that way, because I'd
been thinking of input as a strictly feedforward subsystem that supplies
inputs to the recurrent parts of a large recurrent ANN - but in
biological brains, it's all part of the same feedforward/feedback cycle.
And it's that way for a good reason that applies to artificial neural
networks as well.

Convolutional, recurrent, and spiking networks are already on my list of
things that need to be implemented.  Today's epiphany means that there's
an enormous CPU savings for extremely large networks if they are all
three at once.  Nobody's made one that way yet.


Aside:  Man, is evolution awesome, or what?  It finds the most amazingly
good ways to do SO MANY things!

