
Re: [Gneuralnetwork] Draft of OpenMP parallelized Genetic Algorithm


From: Jean Michel Sellier
Subject: Re: [Gneuralnetwork] Draft of OpenMP parallelized Genetic Algorithm
Date: Mon, 4 Apr 2016 08:19:29 +0200

Hi Nan,

This is great! Thank you so much for being so fast! I will review your code and include it in the new release. In the meantime, I am in the process of parallelizing the other optimizers.

Concerning restructuring the code, another coder is already helping me with that, so it should make things easier later on. Thanks for commenting on it though!

Best,

JM


2016-04-04 4:31 GMT+02:00 Nan . <address@hidden>:
Hi JM,

Please check the attached file, which is a draft version of the current GA.

Most of the GA is now parallelized, including the quicksort part (which took me a long time to finish :-|).
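
To make the task-based approach concrete, here is a minimal, self-contained sketch of a quicksort split into OpenMP tasks; the plain double array and the function names are placeholders of mine, not the attached GA code:

/* Sketch only: task-parallel quicksort over an array of fitness
   values.  Compiled without -fopenmp the pragmas are ignored and
   this is an ordinary recursive quicksort. */
static void qsort_tasks(double *a, int lo, int hi)
{
    if (lo >= hi) return;

    /* standard Lomuto partition around the last element */
    double pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            i++;
            double t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    i++;
    double t = a[i]; a[i] = a[hi]; a[hi] = t;

    /* sort the two halves as independent tasks; the if() cutoff
       avoids spawning tasks for tiny sub-arrays */
    #pragma omp task if (hi - lo > 1000)
    qsort_tasks(a, lo, i - 1);
    #pragma omp task if (hi - lo > 1000)
    qsort_tasks(a, i + 1, hi);
    #pragma omp taskwait
}

void parallel_sort(double *a, int n)
{
    #pragma omp parallel
    #pragma omp single nowait
    qsort_tasks(a, 0, n - 1);
}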

The problematic part is the error calculation during training. Currently we use a global NETWORK and a global array of NEURONs: we have to set the input, feedforward, get the output, and then calculate the error. There is nothing wrong with that in the serial version, but in the parallelized version we have to wrap it all in one big CRITICAL section, which effectively makes it serial again.
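
To illustrate the problem, a small self-contained sketch (placeholder types and arithmetic of mine, not the Gneural Network sources): because every thread must take the same critical section for the whole set-input / feedforward / read-output sequence on the single global network, the parallel loop ends up evaluating roughly one individual at a time.

#include <stdio.h>

#define NPOP 64

struct network { double input, weight, output; };
static struct network net;   /* one shared global, as in the current design */

static double eval_individual(double w, double x, double target)
{
    double err;
    /* every access to the global state must be serialized */
    #pragma omp critical
    {
        net.weight = w;
        net.input  = x;
        net.output = net.weight * net.input;   /* stand-in for feedforward */
        err = (net.output - target) * (net.output - target);
    }
    return err;
}

int main(void)
{
    double errors[NPOP];

    #pragma omp parallel for
    for (int i = 0; i < NPOP; i++)
        errors[i] = eval_individual(0.1 * i, 2.0, 1.0);

    for (int i = 0; i < NPOP; i++)
        printf("%d %g\n", i, errors[i]);
    return 0;
}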

I tried another way, making a local copy of the NETWORK and NEURONs, but these two components share the same internal id, which confused OpenMP. :P

We might need to change the design of NETWORK and NEURONs in the future, or keep the error calculation serialized (i.e. the big CRITICAL section).
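
For comparison, a sketch of the kind of design change being discussed: if the network state is passed around explicitly (or copied per thread) instead of living in one global, the critical section disappears. Again, the types and arithmetic here are placeholders of mine, not the actual code.

#include <stdio.h>

#define NPOP 64

struct network { double input, weight, output; };

static double eval_local(struct network net, double w, double x, double target)
{
    /* 'net' is a private copy on this thread's stack, so no locking */
    net.weight = w;
    net.input  = x;
    net.output = net.weight * net.input;   /* stand-in for feedforward */
    return (net.output - target) * (net.output - target);
}

int main(void)
{
    double errors[NPOP];
    struct network template_net = { 0.0, 0.0, 0.0 };

    /* each thread starts from its own copy of the template network */
    #pragma omp parallel for firstprivate(template_net)
    for (int i = 0; i < NPOP; i++)
        errors[i] = eval_local(template_net, 0.1 * i, 2.0, 1.0);

    for (int i = 0; i < NPOP; i++)
        printf("%d %g\n", i, errors[i]);
    return 0;
}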

I hope someone can improve the code.

Thanks in advance.

BTW: if you compile the code without -fopenmp, it keeps the same behavior as the previous version.

Nan.



