
From: Jean Michel Sellier
Subject: [Gneuralnetwork] training process and optimizers
Date: Sat, 19 Mar 2016 16:34:55 +0100

Hello All,

The current version of Gneural Network (0.5.0) implements four different optimizers for the training process, namely:

- a simulated annealing method (see the sketch after this list),
- a gradient descent method,
- a simple random search approach,
- a genetic algorithm.
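
For those who have not looked at the code yet, this is roughly the idea behind the annealing loop. It is only a minimal, self-contained sketch with placeholder names (error, anneal, NWEIGHTS), not the actual Gneural Network implementation:

/* Minimal simulated-annealing sketch over a generic weight vector.
   The names here (error, anneal, NWEIGHTS) are placeholders for
   illustration, not Gneural Network's actual code. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NWEIGHTS 16

/* toy cost for demonstration only: distance from an arbitrary target */
static double error(const double *w, int n)
{
    double e = 0.0;
    for (int i = 0; i < n; i++) {
        double d = w[i] - 0.5;
        e += d * d;
    }
    return e;
}

static void anneal(double *w, int n, double t0, double cooling, int steps)
{
    double current = error(w, n);
    double t = t0;
    for (int s = 0; s < steps; s++) {
        int i = rand() % n;                 /* perturb one random weight */
        double old = w[i];
        w[i] += t * (2.0 * rand() / RAND_MAX - 1.0);
        double e = error(w, n);
        /* accept downhill moves always, uphill moves with probability
           exp(-(e - current) / t) (Metropolis criterion) */
        if (e < current || exp((current - e) / t) > (double)rand() / RAND_MAX)
            current = e;
        else
            w[i] = old;                     /* reject: restore the weight */
        t *= cooling;                       /* geometric cooling schedule */
    }
}

int main(void)
{
    double w[NWEIGHTS] = {0.0};
    anneal(w, NWEIGHTS, 1.0, 0.999, 20000);
    printf("final error: %g\n", error(w, NWEIGHTS));
    return 0;
}

(Compiles with something like gcc -std=c99 -lm.)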

I know that some of you in this community are experts in optimization problems, and I am looking for your help (although anyone with an idea is encouraged to participate in this conversation).

As far as I can see, the simulated annealing approach is the most efficient (at least in the tests I have run over the last few months). Now I am looking for something different that could outperform this approach in terms of computational resources and, MOST OF ALL, in terms of parallelization. In particular, I have Monte Carlo algorithms in mind.
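
To make the question concrete, here is a rough sketch of the kind of embarrassingly parallel Monte Carlo search I am thinking of, using OpenMP. Again, the names (error, mc_search, NWEIGHTS) are placeholders for illustration, not actual Gneural Network code:

/* Sketch of an embarrassingly parallel Monte Carlo search with OpenMP:
   each thread samples candidate weight vectors independently and the
   best candidate across all threads wins.  Placeholder names only,
   not actual Gneural Network code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <omp.h>

#define NWEIGHTS 16

/* toy cost for demonstration only: distance from an arbitrary target */
static double error(const double *w, int n)
{
    double e = 0.0;
    for (int i = 0; i < n; i++) {
        double d = w[i] - 0.5;
        e += d * d;
    }
    return e;
}

static void mc_search(double *best_w, int n, int samples_per_thread)
{
    double best_e = error(best_w, n);
    #pragma omp parallel
    {
        unsigned seed = 1234u + omp_get_thread_num(); /* per-thread RNG stream */
        double w[NWEIGHTS], local_w[NWEIGHTS];
        double local_e = HUGE_VAL;
        for (int s = 0; s < samples_per_thread; s++) {
            for (int i = 0; i < n; i++)    /* uniform sample in [-1, 1]^n */
                w[i] = 2.0 * rand_r(&seed) / RAND_MAX - 1.0;
            double e = error(w, n);
            if (e < local_e) {             /* keep this thread's best candidate */
                local_e = e;
                memcpy(local_w, w, n * sizeof(double));
            }
        }
        #pragma omp critical               /* merge results across threads */
        if (local_e < best_e) {
            best_e = local_e;
            memcpy(best_w, local_w, n * sizeof(double));
        }
    }
    printf("best error found: %g\n", best_e);
}

int main(void)
{
    double w[NWEIGHTS] = {0.0};
    mc_search(w, NWEIGHTS, 100000);
    return 0;
}

It compiles with gcc -fopenmp -lm (rand_r is POSIX). Since the threads do not communicate until the final merge, this kind of search should parallelize almost trivially; the open question for me is whether a smarter proposal distribution (for instance, sampling around the current best point rather than uniformly) can make it competitive with annealing.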

Do any of you have experience with such optimizers? Any advice that could help me code something better in this respect? Do you have a piece of code you would like to share or develop to enhance the training process in Gneural Network?

MANY thanks in advance!

JM
