How to beat Monte Carlo (without QMC)

Say I want to approximate the integral $I(f) = \int_{[0,1]^s} f(u)\,\mathrm{d}u$ based on $n$ evaluations of a function $f$. I could use plain old Monte Carlo: $\hat{I}_n = \frac{1}{n}\sum_{i=1}^{n} f(U_i)$, where the $U_i$'s are i.i.d. uniform variables over $[0,1]^s$, whose RMSE (root mean square error) is $O(n^{-1/2})$. Can I do better? That is, can I design an alternative estimator/algorithm, which performs $n$ evaluations and returns a random output, with an RMSE that converges quicker? Surprisingly, …
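
To make the baseline concrete, here is a minimal sketch (my own; the integrand `f` is an arbitrary placeholder, not taken from the post) that estimates such an integral by plain Monte Carlo and checks empirically that the RMSE decays like $n^{-1/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(u):
    # placeholder integrand on [0, 1]^2; its exact integral is 0
    return np.cos(2 * np.pi * u[:, 0]) * u[:, 1]

def mc_estimate(n, s=2):
    u = rng.random((n, s))  # n i.i.d. uniform points on [0, 1]^s
    return f(u).mean()

# empirical RMSE over 100 independent replicates, for increasing n
for n in [10**3, 10**4, 10**5]:
    est = np.array([mc_estimate(n) for _ in range(100)])
    rmse = np.sqrt(np.mean(est ** 2))  # the exact integral is 0 here
    print(n, rmse)  # should shrink by roughly sqrt(10) at each step
```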

New smoothing algorithms in particles

Hi, just a quick post to announce that particles now implements several of the smoothing algorithms introduced in our recent paper with Dang on the complexity of smoothing algorithms. Here is a plot that compares their running times for a given number of particles: [figure not included in this excerpt]. All these algorithms are based on FFBS (forward filtering backward smoothing). …
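
As a reminder of what the backward-sampling step of FFBS looks like, here is a generic numpy sketch (my own illustration of the principle, not the actual API of particles; all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def ffbs_trajectory(xs, ws, log_trans):
    """Draw one smoothing trajectory by FFBS backward sampling.

    xs:  list of (N,) arrays of filtering particles, for t = 0, ..., T
    ws:  list of (N,) arrays of normalized filtering weights
    log_trans(xnext, x): log-density of the transition x_t -> x_{t+1},
                         vectorized with respect to x
    """
    T = len(xs) - 1
    idx = rng.choice(len(xs[T]), p=ws[T])  # draw the final state
    traj = [xs[T][idx]]
    for t in range(T - 1, -1, -1):
        # reweight the time-t particles by the transition to the chosen x_{t+1}
        lw = np.log(ws[t]) + log_trans(traj[-1], xs[t])
        w = np.exp(lw - lw.max())
        idx = rng.choice(len(xs[t]), p=w / w.sum())
        traj.append(xs[t][idx])
    return np.array(traj[::-1])
```

Each backward draw costs O(N) per time step, hence O(TN²) to generate N complete trajectories; reducing that quadratic cost is precisely the kind of issue the paper is concerned with.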

particles 0.3: waste-free SMC, Fortran dependency removed, binary spaces

I have just released version 0.3 of particles (my Python Sequential Monte Carlo library). Here are the main changes. No more Fortran dependency: previous versions of particles relied on a bit of Fortran code to produce QMC (quasi-Monte Carlo) points. This code was automatically compiled during the installation. This was working fine for most users, …
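
For what it's worth, recent SciPy versions (1.7 and above) ship a QMC submodule, which makes it easy to produce such points without any custom compiled code; for instance (a generic example, not necessarily what particles does internally):

```python
from scipy.stats import qmc

# scrambled Sobol' sequence in dimension 5
sampler = qmc.Sobol(d=5, scramble=True, seed=42)
points = sampler.random_base2(m=10)  # 2**10 = 1024 points in [0, 1)^5
print(points.shape)                  # (1024, 5)
```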

Online workshop: Measuring the quality of MCMC output

Hi all! With Leah South from QUT, we are organizing an online workshop on the topic of “Measuring the quality of MCMC output”. More info on the event website: https://bayescomp-isba.github.io/measuringquality.html This is part of the ISBA BayesComp section’s efforts to organize activities while waiting for the next “big” in-person meeting, hopefully in 2023. …

Dempster’s analysis and donkeys

This post is about estimating the parameter of a Bernoulli distribution from observations, in the “Dempster” or “Dempster–Shafer” way, which is a generalization of Bayesian inference. I’ll recall what this approach is about, and describe a Gibbs sampler to perform the computation. Intriguingly, the associated Markov chain happens to be equivalent to the so-called “donkey…
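
Here is my guess at what such a Gibbs sampler may look like, under the sampling mechanism $y_i = \mathbf{1}\{u_i \le \theta\}$ with $u_i \sim \mathcal{U}(0,1)$ (a sketch of the general idea only, not necessarily the post's actual algorithm; all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_dempster_bernoulli(y, n_iter=1000):
    """Gibbs sampler for Dempster's random-interval analysis of Bernoulli data.

    Model: y_i = 1{u_i <= theta}, u_i ~ U(0, 1). Given the data, theta is only
    constrained to lie in [max of u over successes, min of u over failures];
    the sampler resamples each u_i so that this interval stays nonempty.
    """
    y = np.asarray(y, dtype=bool)
    n = len(y)
    u = np.where(y, 0.25, 0.75)  # feasible start: all successes below all failures
    intervals = np.empty((n_iter, 2))
    for it in range(n_iter):
        for i in range(n):
            others = np.arange(n) != i
            lo = u[y & others].max(initial=0.0)   # largest u among other successes
            hi = u[~y & others].min(initial=1.0)  # smallest u among other failures
            # a success's u must stay below every failure's u, and vice versa
            u[i] = rng.uniform(0.0, hi) if y[i] else rng.uniform(lo, 1.0)
        intervals[it] = [u[y].max(initial=0.0), u[~y].min(initial=1.0)]
    return intervals  # each row is one sampled random interval for theta
```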

particles 0.2: what’s new, what’s next (your comments most welcome)

I have just released version 0.2 of my SMC Python library, particles. I list below the main changes, and discuss some ideas for the future of the library. New module: variance_estimators. This module implements various variance estimators that may be computed from a single run of an SMC algorithm, à la Chan and Lai (2013). …
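
To give an idea of what such single-run estimators look like: the Chan & Lai (2013) approach, as I understand it, groups the terminal particles by their time-0 ancestor (their “Eve” index) and sums the squared grouped deviations. A quick numpy sketch (not the module's actual API; normalizing conventions vary across papers):

```python
import numpy as np

def single_run_variance(phi_x, W, eve):
    """Chan & Lai (2013)-style variance estimator from one SMC run (sketch).

    phi_x: (N,) values of the test function at the terminal particles
    W:     (N,) normalized weights
    eve:   (N,) integer index of each particle's time-0 ancestor
    """
    est = np.sum(W * phi_x)                  # the SMC estimate itself
    dev = W * (phi_x - est)                  # weighted deviations
    grouped = np.bincount(eve, weights=dev)  # summed within each ancestral family
    return np.sum(grouped ** 2)              # variance estimate (up to normalization)
```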

On the benefits of reviewing papers

When I’m asked by students whether they should accept some referee invitation (be it for a stat journal or a machine learning conference), I almost invariably say yes. I think that there is a lot to be learnt when refereeing papers, and that this is worth the time spent in the process. I’ll detail in this post…

Everything You Always Wanted to Know About SMC, but were afraid to ask

Ever wanted to learn more about particle filters, sequential Monte Carlo, state-space/hidden Markov models, PMCMC (particle MCMC), SMC samplers, and related topics? In that case, you might want to check the following book by Omiros Papaspiliopoulos and me, which has just been released by Springer, and which may be ordered from their website, or…
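
To give a flavour of the simplest algorithm covered in the book, here is a self-contained bootstrap particle filter for a basic linear Gaussian state-space model (my own sketch, not code from the book or from particles; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_filter(y, N=1000, rho=0.9, sigma_x=1.0, sigma_y=1.0):
    """Bootstrap filter for X_t = rho * X_{t-1} + sigma_x * U_t, Y_t ~ N(X_t, sigma_y^2)."""
    x = rng.normal(0.0, sigma_x / np.sqrt(1.0 - rho**2), size=N)  # stationary init
    means = []
    for yt in y:
        logw = -0.5 * ((yt - x) / sigma_y) ** 2  # Gaussian log-likelihood, up to a constant
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))              # filtering mean E[X_t | y_{0:t}]
        idx = rng.choice(N, size=N, p=w)         # multinomial resampling
        x = rho * x[idx] + sigma_x * rng.normal(size=N)  # propagate through the dynamics
    return np.array(means)

# e.g. on synthetic observations:
y = rng.normal(size=50)
print(bootstrap_filter(y)[:5])
```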