[Gneuralnetwork] Some bugs


From: Gergö Barany
Subject: [Gneuralnetwork] Some bugs
Date: Thu, 24 Mar 2016 21:38:15 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

Hi everyone! Here are a few bugs I found in gneuralnetwork 0.6.0.

1. At simulated_annealing.c:60, a value is divided by (mmax-1). The value of
mmax is constrained by the parser to be positive, but not further. In
particular, it may be 1, in which case this is a division by zero.
Possible fixes (a domain expert should decide):
- require mmax > 0 in the parser AND divide by mmax instead of (mmax-1) here
- require mmax > 1 in the parser
- require mmax >= 0 && mmax != 1 in the parser (which would be weird)

2. At feedforward.c:49, the call binom((i+j-1)/2, j) may be executed with
i = 1, j = 1. This is binom(0, 1) = fact(0) / (fact(1) * fact(-1)), which
tries to divide by 0 because fact(-1) = 0. I'm not familiar with the math
here; should the formula be changed to avoid calls to binom(n, k) in cases
where k > n?

(To have this code path executed, change the test case
tests/curve_fitting.input, setting neuron 5's discriminant function to
LEGENDRE.)

3. Despite using 64-bit numbers internally, the fact() function only works
up to n = 20 without overflowing. Sooner or later someone will want to raise
MAX_IN to 21 or more (from 16), in which case binom() will just return junk.
(So this is more of a future bug.) Here's a better implementation of binom()
that completely avoids the use of fact():

#include <math.h>  /* for round() */

inline int binom(int n, int k)
{
  double res = 1.0;
  int i;

  /* Multiplicative formula: C(n, k) = prod_{i=1..k} (n + 1 - i) / i. */
  for (i = 1; i <= k; i++) {
    res *= (double)(n + 1 - i) / i;
  }
  return (int)round(res);
}

This is based on
https://en.wikipedia.org/wiki/Binomial_coefficient#Multiplicative_formula
Someone who is good with floating-point stuff should probably look over it.

Setting neuron 5's discriminant function to LAGUERRE, I benchmarked the old
and new binom() implementations against each other; with this one, the test
runs about 10% faster. (Simply using a lookup table should be faster still.)

This implementation also fixes/hides the bug described in point 2, because it
implements a generalized binomial coefficient that does not require k <= n.


