
Re: [Gneuralnetwork] Bayesian models and Monte Carlo methods


From: Jean Michel Sellier
Subject: Re: [Gneuralnetwork] Bayesian models and Monte Carlo methods
Date: Thu, 24 Mar 2016 16:17:25 +0100

Hi Tobias,

Many thanks for this very interesting email. I think there is a misunderstanding here, though. Monte Carlo methods are a VERY broad family of numerical methods which deal with A LOT of different problems (the ones you mention are only a small part of this huge family). Let me quote an extract from a paper of mine which explains what this family is about:

"The purpose of Monte Carlo methods is to approximate the solution of problems in computational mathematics by using
random processes for each such problem. These methods give statistical estimates for any linear functional of the solution by
performing random sampling of a certain random variable whose mathematical expectation is the desired functional [46].
Essentially, they reduce a given problem to approximate calculations of some mathematical expectation. They represent
a very powerful tool when it comes to solve problems in mathematics, physics and engineering where the deterministic
methods hopelessly break down. Indeed Monte Carlo methods do not require any additional regularity of the solution and it
is always possible to control the accuracy of this solution in terms of the probability error. Another important advantage in
using Monte Carlo methods consists in the fact that they are very efficient in dealing with large and very large computational
problems such as multi-dimensional integration, very large linear systems, partial integro-differential equations in highly
dimensional spaces, etc. Finally, these methods are efficient on parallel processors and parallel machines. Thus, it is not
surprising that these methods have rapidly found a wide range of applications in applied Science."
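
To make the "reduction to a mathematical expectation" concrete, here is a tiny C example I am writing ad hoc (it is not from the paper, nor from Gneural Network): it estimates a 10-dimensional integral over the unit hypercube as the mean of the integrand under uniform random samples.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define DIM     10        /* dimension of the integration domain */
    #define SAMPLES 1000000L  /* number of random points */

    /* example integrand on the unit hypercube [0,1]^DIM */
    static double f(const double *x)
    {
        double s = 0.0;
        for (int i = 0; i < DIM; i++)
            s += x[i] * x[i];
        return exp(-s);
    }

    int main(void)
    {
        double sum = 0.0, x[DIM];
        srand(12345);
        for (long n = 0; n < SAMPLES; n++) {
            for (int i = 0; i < DIM; i++)
                x[i] = (double)rand() / RAND_MAX;  /* uniform sample */
            sum += f(x);
        }
        /* the volume of [0,1]^DIM is 1, so the sample mean of f estimates
           the integral; the probable error shrinks like 1/sqrt(SAMPLES),
           independently of DIM, and the samples are trivially parallel */
        printf("estimate = %g\n", sum / SAMPLES);
        return 0;
    }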

I hope this somehow clarifies what I meant by Monte Carlo methods for optimization problems. Essentially, what I am looking for is a method which exploits the generation of (independent) random numbers to solve an optimization problem. This would represent an important feature of Gneural Network, since it would be extremely easy to parallelize and therefore useful for the training of "deep" neural networks.
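
As a rough sketch of what such an optimizer could look like (pure random search, only for illustration; the loss function and sizes below are placeholders, not anything from the current code):

    #include <stdio.h>
    #include <stdlib.h>

    #define NWEIGHTS 64       /* placeholder: size of the weight vector */
    #define TRIALS   100000L  /* independent random candidates */

    /* placeholder loss: in Gneural Network this would be the network
       error on the training set for a given weight vector */
    static double loss(const double *w)
    {
        double s = 0.0;
        for (int i = 0; i < NWEIGHTS; i++)
            s += (w[i] - 0.5) * (w[i] - 0.5);
        return s;
    }

    int main(void)
    {
        double cand[NWEIGHTS], best[NWEIGHTS];
        double best_loss = 1e300;
        srand(42);
        /* every trial is independent, so this loop parallelizes trivially:
           each process draws its own candidates and only the best result
           has to be collected at the end */
        for (long t = 0; t < TRIALS; t++) {
            for (int i = 0; i < NWEIGHTS; i++)
                cand[i] = 2.0 * (double)rand() / RAND_MAX - 1.0;
            double l = loss(cand);
            if (l < best_loss) {
                best_loss = l;
                for (int i = 0; i < NWEIGHTS; i++)
                    best[i] = cand[i];
            }
        }
        printf("best loss found: %g (first weight %g)\n", best_loss, best[0]);
        return 0;
    }

More refined variants keep the same spirit but draw candidates near the current best instead of uniformly.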

I hope this helps,

Best,

JM


2016-03-24 15:50 GMT+01:00 Tobias Wessels <address@hidden>:
Hi everyone,

First a short remark on my previous question and then, more
importantly, an issue which I have with the current implementation of
neural networks.

I have read more about Monte Carlo methods and my previous question is
answered by the (already cited) book of MacKay. My misconception was that
        $\sum_{i=1}^L \tilde q(w_i)$
would approximate the total weight of $\tilde q$, which it doesn't,
because the $w_i$ are already drawn according to the distribution $q$.
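
Spelled out (my own reasoning, so please correct me if I am wrong): since the $w_i$ are drawn from the normalized density $q$, the law of large numbers gives

        $\frac{1}{L} \sum_{i=1}^L \tilde q(w_i) \;\longrightarrow\; \mathbb{E}_{w \sim q}[\tilde q(w)] = \int \tilde q(w)\, q(w)\, dw \quad (L \to \infty)$,

which is not the total weight $Z = \int \tilde q(w)\, dw$. To estimate $Z$ one would instead draw the $w_i$ from some sampler density $r$ (positive wherever $\tilde q$ is) and average the importance weights $\tilde q(w_i)/r(w_i)$.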


Now to the more important issue:
As it seems to me, Monte Carlo methods are mainly used for models based
on Bayesian inference, since in these models one frequently has to
calculate probabilistic integrals, which is a difficult task unless the
model is restricted to a very narrow class of distribution functions
(e.g. Gaussians). Monte Carlo methods are a tool to approximate these
integrals by simple (that is, computationally inexpensive) means.
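
For instance, here is a toy one-dimensional sketch in C (the densities and the sampler below are made up purely for illustration, nothing here comes from Gneural Network):

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define SAMPLES 1000000L

    /* made-up unnormalized posterior density over a single weight w */
    static double p_tilde(double w)
    {
        return exp(-w * w) * (2.0 + sin(5.0 * w));
    }

    /* quantity whose posterior expectation we want, e.g. a network output */
    static double f(double w) { return w * w; }

    int main(void)
    {
        /* crude sampler: uniform on [-5,5], density r(w) = 1/10 there
           (the posterior mass outside this interval is negligible) */
        double num = 0.0, den = 0.0;
        srand(7);
        for (long n = 0; n < SAMPLES; n++) {
            double w  = 10.0 * (double)rand() / RAND_MAX - 5.0;
            double iw = p_tilde(w) / 0.1;  /* importance weight p~(w)/r(w) */
            num += iw * f(w);
            den += iw;
        }
        /* self-normalized importance sampling: the unknown normalizing
           constant of p~ cancels in the ratio num/den */
        printf("E[f] ~ %g\n", num / den);
        return 0;
    }

The same mechanics carry over to weight vectors in many dimensions, where the sampler just has to be chosen more carefully.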

The issue that I have now with the current implementation of neural
networks is that it is not adapted to Bayesian inference models. In
these models, the network is not trained towards a single, most likely
choice of weight vector as in the maximum likelihood method; instead,
one considers a probability distribution over the weights of the
neurons and then calculates an average (expected value) of the output,
given a new input x. (This is the important step where Monte Carlo
methods are used, because this expectation is basically an integral
over a very complex probability density.)

In the current implementation, however, each node has a single
parameter/array, which is set to the most likely choice, rather than
holding a distribution of possible values for the weights. I don't see
how I would implement a Bayesian inference model using the current
code.
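
To make it concrete, here is a toy sketch of the kind of layout I have in mind (hypothetical numbers and a single linear "neuron", not a patch against the actual code): keep several sampled weight vectors per neuron and average the outputs of the forward pass over them.

    #include <stdio.h>

    #define NWEIGHTS 4   /* weights of one toy neuron */
    #define NSAMPLES 3   /* posterior samples kept per neuron (tiny, for illustration) */

    /* hypothetical layout: one row per sampled weight vector,
       instead of the single array the current code stores */
    static const double w_samples[NSAMPLES][NWEIGHTS] = {
        {0.9, -0.2, 0.1, 0.4},
        {1.1, -0.3, 0.0, 0.5},
        {1.0, -0.1, 0.2, 0.3},
    };

    /* toy "network": a single linear neuron */
    static double forward(const double *w, const double *x)
    {
        double y = 0.0;
        for (int i = 0; i < NWEIGHTS; i++)
            y += w[i] * x[i];
        return y;
    }

    int main(void)
    {
        const double x[NWEIGHTS] = {1.0, 2.0, 3.0, 4.0};
        double mean = 0.0;
        /* Bayesian predictive output: average the network output over
           the sampled weight vectors instead of using one "best" vector */
        for (int s = 0; s < NSAMPLES; s++)
            mean += forward(w_samples[s], x);
        mean /= NSAMPLES;
        printf("predictive mean = %g\n", mean);
        return 0;
    }

(In a real implementation the samples would of course come from a Monte Carlo method run on the posterior, not from a hard-coded table.)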

I hope that my explanations were somewhat clear and somewhat accurate,
as I am still new to this topic myself. So if you see it differently,
please help me understand it.

Kind regards,

Tobias



