From: Jean Michel Sellier
Subject: Re: [Gneuralnetwork] I found a good idea today...
Date: Mon, 21 Nov 2016 11:10:25 +0100
All of this sounds EXTREMELY exciting! Please keep us updated.
Thanks A LOT for your hard work on this project!
Happy Gneural Hacking! :)
> 2016-11-19 5:23 GMT+01:00 Ray Dillinger <address@hidden>:
>> Today my ticker coughed up this article
>> And I thought about it for a minute or three and then realized
>> that the state of the art in Artificial Neural Networks does
>> convolutional neural networks in a way that's not very
>> efficient compared to the way the brain does them.
On 11/19/2016 12:45 AM, Jean Michel Sellier wrote:
> My question is: would it be possible to implement this new approach in
> Gneural Network? In that case, we could write a paper about it, and maybe
> that could spread the word about this package. What do you think?
It definitely will, because all three of these interacting algorithmic
techniques are already in the to-do queue. I'm not going to stop until
I've done them all, including optimizing the implementation of spiking
networks, because I need all of them for my personal AI project.
Incidentally my personal AI project includes a BUNCH of other new things
that papers could be written about or patents filed on. I've been
sitting on them mainly because I'm not working at a company where I have
free access to a patent attorney right now and I'm not attached to a
research institution where publishing papers is simple and easy.
Also, because existing ANN frameworks are too limited, I don't really
know yet whether practice matches theory for these ideas at scale. I
need gneuralnetwork drastically increased in capability before I can
fully test them all. Like the one I realized yesterday, many will
probably work best at a scale far beyond what existing ANN frameworks
can handle.
The inadequacy of existing ANN frameworks is why I decided to build a
better one, and the still-in-its-infancy state of development, which
allows growth in some of the peculiar directions I need, is why I picked
this project as the place to do it.
Building new infrastructure support for recurrent, arbitrarily large
and arbitrarily structured networks is already underway. Once that's
done, (non-optimized) spiking networks become a simple scripting
configuration - just use the unsigned step function for activation.
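To make that point concrete, here is a minimal sketch in plain Python (the names are illustrative, not Gneural Network's actual API): a conventional weighted-sum neuron becomes a "spiking" unit simply by swapping its activation for the unsigned step function.

```python
import math

def unsigned_step(x, threshold=0.0):
    """Fire (output 1.0) iff the summed input reaches the threshold."""
    return 1.0 if x >= threshold else 0.0

def sigmoid(x):
    """Conventional smooth activation, for contrast."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(weights, inputs, activation):
    """One node: weighted sum of its inputs, passed through the activation."""
    return activation(sum(w * i for w, i in zip(weights, inputs)))

w, x = [0.5, -0.2], [1.0, 1.0]
print(neuron_output(w, x, sigmoid))        # graded value (~0.574)
print(neuron_output(w, x, unsigned_step))  # 1.0 - a spike
```

Nothing about the node's wiring changes; only the activation function does, which is why a scripting-level switch is all the non-optimized case needs.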
Optimized-for-spiking implementations of a few key routines will be
relatively easy (a few weeks of effort) once the basic infrastructure is
done. They'll use the same basic infrastructure and they'll be invisible
to the scripting language; they'll just kick in whenever someone makes a
network that uses the step function for everything. But they're
necessary too because without them it would be impossible to train a
network at very large scale within a reasonable timeframe.
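The email doesn't spell out what the optimized routines would do, but one standard trick for all-step-function networks (an assumption here, not a statement about Gneural Network's design) is event-driven propagation: since every output is 0 or 1, only neurons that actually fired need to push anything downstream, so the work per step scales with spike activity rather than with total connection count. A hypothetical sketch:

```python
from collections import defaultdict

def propagate_events(spiking_ids, outgoing, potentials):
    """Event-driven update: visit only edges leaving neurons that fired.

    spiking_ids : iterable of neuron ids that emitted a spike (output 1)
    outgoing    : dict mapping id -> list of (target_id, weight) edges
    potentials  : dict mapping id -> accumulated input for the next step
    Returns the number of edges visited, which is proportional to
    activity, not to the size of the whole network.
    """
    visited = 0
    for src in spiking_ids:
        for dst, w in outgoing.get(src, ()):
            potentials[dst] += w
            visited += 1
    return visited

# Tiny usage example: 3 neurons, only neuron 0 fired this step,
# so neuron 1's outgoing edge is never touched.
edges = {0: [(1, 0.7), (2, -0.3)], 1: [(2, 1.0)]}
pot = defaultdict(float)
visited = propagate_events([0], edges, pot)
print(visited, dict(pot))  # 2 {1: 0.7, 2: -0.3}
```

With millions of nodes but sparse firing, this is the difference between touching hundreds of millions of connections per step and touching only the handful that carry a spike.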
Fully implementing convolution kernels will be a *lot* more work than
the spiking-optimized routines, and will require major additions to the
scripting language. That will take at least months, even once everything
else is working. But that effort is a given anyway, because first of
all I need it myself, and second, NOT implementing it would be stupid
for a general neural-network tool. That's a basic utility that a *LOT*
of people will want if they're building non-trivial stuff.
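For readers unfamiliar with the term, the essence of a convolution kernel is weight sharing: one small set of weights is slid across every position of the input, so a feature detector learned in one place applies everywhere. A plain-Python sketch (illustrative only, not the planned scripting-language syntax):

```python
def convolve2d_valid(image, kernel):
    """'Valid'-mode 2D cross-correlation: slide the kernel over the image,
    reusing the same weights at every position (that reuse is the point)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for dr in range(kh):
                for dc in range(kw):
                    acc += image[r + dr][c + dc] * kernel[dr][dc]
            row.append(acc)
        out.append(row)
    return out

# A 2x2 kernel applied to a 3x3 image yields a 2x2 feature map;
# the same 4 weights are reused at all 4 positions.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ker = [[1, 0],
       [0, -1]]
print(convolve2d_valid(img, ker))  # [[-4.0, -4.0], [-4.0, -4.0]]
```

Supporting this in a general tool is more than the arithmetic above: the scripting language has to express tied weights, kernel shapes, and strides, which is why it's months of work rather than weeks.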
ALWAYS keep scale in mind. To make convolutional neural networks you
need something that can handle at least a few hundred nodes - possibly
up to tens of thousands if you're building something with many basic
recognition categories. The basic infrastructure I'm currently building
has to be in place before we can consider that. For my own project I'm
going to need to scale to possibly as many as a few million nodes and
hundreds of millions of connections. I shan't be able to do that
without optimized spiking networks.