
[Gneuralnetwork] Monte Carlo: Importance sampling


From: Tobias Wessels
Subject: [Gneuralnetwork] Monte Carlo: Importance sampling
Date: Thu, 24 Mar 2016 10:40:36 +0700

As Monte Carlo methods are apparently the desired approach, I have read
the section on that topic in Bishop's book. It is very short and
essentially just a summary of notions.

However, there is a formula which I don't understand. I would have
dismissed it as a typo, but I searched for other resources on Monte
Carlo methods and found another paper [1] that states the same
formula, so I thought I'd better ask about it.

The formula in question is on page 426, formula (10.110). It is derived
from formula (10.109), which still makes sense to me, but then Bishop
assumes that p is not normalized. But how does that change the
formula?! The formula is correct as long as q is normalized! Even if we
assume that q should resemble p and that q is therefore not normalized
either, the normalizing factor should only involve q, i.e. in LaTeX
notation the normalizing factor should be

$\sum_{i=1}^{L} q(w_i)$

instead of

$\sum_{i=1}^{L} \tilde p(w_i)/q(w_i)$.

It would be nice if someone could clarify this. Anyway, I'm interested
in implementing (even the more sophisticated) Monte Carlo methods as
soon as I know enough about them.
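For concreteness, here is a minimal numerical sketch of the estimator
exactly as Bishop writes it in (10.110), i.e. with the importance
weights $\tilde p(w_i)/q(w_i)$ divided by their own sum. The
unnormalized Gaussian target `ptilde`, the Gaussian proposal `q_pdf`,
and the test function f(x) = x^2 are my own choices for illustration,
not anything from the book:

```python
import math
import random

random.seed(0)

# Unnormalized target: ptilde(x) = exp(-x^2 / 2), i.e. a standard
# Gaussian missing its 1/sqrt(2*pi) normalizing constant.
def ptilde(x):
    return math.exp(-x * x / 2.0)

# Normalized proposal q: Gaussian with mean 0 and std 2, which is
# broader than the target so its tails cover it.
Q_STD = 2.0
def q_pdf(x):
    return math.exp(-x * x / (2.0 * Q_STD ** 2)) / (Q_STD * math.sqrt(2.0 * math.pi))

def f(x):
    # Estimate E_p[x^2]; for a standard Gaussian the true value is 1.
    return x * x

L = 200000
xs = [random.gauss(0.0, Q_STD) for _ in range(L)]

# Importance weights r_i = ptilde(x_i) / q(x_i), as in (10.109)/(10.110).
ws = [ptilde(x) / q_pdf(x) for x in xs]

# Estimator as written in (10.110): weights divided by the sum of the
# weights themselves, sum_i ptilde(x_i)/q(x_i).
estimate = sum(w * f(x) for w, x in zip(ws, xs)) / sum(ws)
print(estimate)  # should be close to 1.0
```

Running this, the estimate comes out close to the true value 1 even
though `ptilde` carries an arbitrary unknown constant, which at least
shows numerically that dividing by the sum of the weights cancels that
constant.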

Kind regards,
Tobias

[1] http://www.inference.phy.cam.ac.uk/mackay/BayesMC.html David MacKay
has published quite a lot of information about Monte Carlo methods, and
he even has a free book on his website. The paper that I am reading at
the moment is the first one under this link: "Introduction to Monte
Carlo methods" (erice.pdf).


