Re: [Gneuralnetwork] Error calculations


From: Jean Michel Sellier
Subject: Re: [Gneuralnetwork] Error calculations
Date: Wed, 23 Mar 2016 09:52:14 +0100

Hi Tobias,

Many thanks for your comments!

Concerning your first point, yes, you are correct. There is a bug in the function error(), but it is being fixed as I write this email; the fix will probably be released Friday or Saturday. Unfortunately, your patch will not be applicable, since I am generalizing the whole routine to work for an arbitrary number of neurons in the input and output layers (previously the code assumed exactly one neuron in the input layer and one neuron in the output layer). Anyway, thank you so much for pointing the community towards this bug and for trying to fix it!
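
To give an idea of the direction, here is a minimal sketch of what such a generalized routine could look like; the function name, the flattened data layout and all parameter names below are illustrative only, not the actual Gneural Network code:

#include <stddef.h>

/* Illustrative sketch: sum-of-squares error accumulated over every
   output neuron and every training case, instead of assuming a
   single input/output neuron. */
double total_error(const double *output,  /* network outputs, flattened  */
                   const double *target,  /* desired outputs, flattened  */
                   size_t ncases,         /* number of training cases    */
                   size_t noutputs)       /* neurons in the output layer */
{
    double err = 0.0;
    for (size_t c = 0; c < ncases; c++) {
        for (size_t j = 0; j < noutputs; j++) {
            double d = output[c * noutputs + j] - target[c * noutputs + j];
            err += d * d;
        }
    }
    return 0.5 * err; /* the conventional 1/2 factor simplifies gradients */
}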

Concerning page 140 of the book, this is the part which discusses the back-propagation algorithm. I partially agree with you on this one; let me explain why. Back-propagation is a very efficient and very good algorithm when things are quite regular (in a mathematical sense), but it tends to fail for real-world applications where regularity is not an option. This is why, a few days ago, I sent a message to this community to discuss Monte Carlo methods for optimization problems. Personally, I think this is where we could make a huge difference. In fact, not only are these methods known to be incredibly efficient and robust, they are also incredibly scalable (and we need that if we want to deal with "deep" learning). I checked, and even Google's TensorFlow doesn't have them, so we could have a good point here ;)
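
Just to make the idea concrete, here is a minimal random-search sketch of the kind of Monte Carlo optimization I have in mind; the error_fn callback and the perturbation scheme are purely illustrative, and a serious implementation would at least use a Metropolis-style acceptance rule and a better random number generator:

#include <stdlib.h>

typedef double (*error_fn)(const double *weights, size_t nweights);

/* Illustrative Monte Carlo (random search) optimization: perturb one
   weight at random and keep the move only if the error decreases. */
void monte_carlo_optimize(double *weights, size_t nweights,
                          error_fn evaluate, double step,
                          unsigned long iterations)
{
    double best = evaluate(weights, nweights);
    for (unsigned long it = 0; it < iterations; it++) {
        size_t i = (size_t)rand() % nweights;
        double old = weights[i];
        /* random perturbation in [-step, +step] */
        weights[i] += step * (2.0 * rand() / RAND_MAX - 1.0);
        double err = evaluate(weights, nweights);
        if (err < best)
            best = err;       /* accept the move */
        else
            weights[i] = old; /* reject it and restore the weight */
    }
}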

Concerning pointers to functions, they would certainly make the code more elegant, but also much more complex. Honestly, for now we are still in the process of getting to version 1.0.0, so I guess we have to keep things simple. But this is just my opinion, of course...

I hope this answers your very interesting comments!

Thanks again!

JM


2016-03-23 9:01 GMT+01:00 Tobias Wessels <address@hidden>:
Dear Gneural Network community,

I had a quick look at the method used to calculate errors in Gneural
Network and I believe I have found a typo/mistake. I have attached a
patch, but I would be happy if someone could review it, as I haven't
done any testing with the software (either with or without the patch).

Furthermore, as suggested, I am currently reading through the book
"Neural Networks for Pattern Recognition" (unfortunately I have been
busy, so I haven't had much time to read). I am now at the chapter
about error back-propagation, on page 140, and at this point I decided
to compare the theory with the code. The method of calculating
derivatives in the current version is quite basic. The book proposes a
somewhat more detailed calculation of the errors (p. 144 has a summary
consisting of 4 simple steps), and it seems to me that this method is
both more accurate and requires less computation, since the current
method needs to evaluate the network several times. What do you think?
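
To illustrate the cost difference, here is a rough sketch (the net_error_fn callback and the flat weight vector are hypothetical, not the actual code). A finite-difference gradient of this kind needs one extra network evaluation per weight, i.e. O(nweights) forward passes, while the four-step procedure from the book obtains the whole gradient from a single forward and a single backward pass:

#include <stddef.h>

typedef double (*net_error_fn)(const double *weights, size_t nweights);

/* Illustrative finite-difference gradient: one extra network
   evaluation per weight.  Back-propagation gets the same gradient
   (without the O(eps) truncation error) in one backward sweep, by
   computing the output deltas delta_k = (y_k - t_k) * g'(a_k) and
   propagating them backwards through the weights. */
void fd_gradient(double *weights, double *grad, size_t nweights,
                 net_error_fn error, double eps)
{
    double e0 = error(weights, nweights);
    for (size_t i = 0; i < nweights; i++) {
        double old = weights[i];
        weights[i] = old + eps;
        grad[i] = (error(weights, nweights) - e0) / eps;
        weights[i] = old; /* restore before moving to the next weight */
    }
}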

Furthermore, what do you think about the idea that each neuron should
carry a function pointer to its activation function? It makes the code
more complex, but also more flexible, and in my opinion neurons have
activation FUNCTIONS, not activation types, as a property...
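
Something along these lines is what I have in mind; the struct layout below is a rough sketch made up for illustration, not taken from the actual Gneural Network sources:

#include <math.h>
#include <stddef.h>

typedef double (*activation_fn)(double);

/* Sketch: a neuron that carries its activation as a function
   pointer instead of an activation-type tag. */
struct neuron {
    double       *weights;     /* incoming connection weights */
    size_t        nweights;
    activation_fn activation;  /* e.g. logistic_act, tanh_act */
};

static double logistic_act(double x) { return 1.0 / (1.0 + exp(-x)); }
static double tanh_act(double x)     { return tanh(x); }

/* Firing a neuron then needs no switch over activation types: */
static double fire(const struct neuron *n, const double *inputs)
{
    double a = 0.0;
    for (size_t i = 0; i < n->nweights; i++)
        a += n->weights[i] * inputs[i];
    return n->activation(a);
}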

Kind regards,

Tobias


