Particle methods in Statistics

Posted in General, Statistics by Pierre Jacob on 30 June 2017
[Image: a statistician sampling from a posterior distribution with particle methods]

Hi there,

In this post, just in time for the summer, I propose a reading list for people interested in discovering the fascinating world of particle methods, aka sequential Monte Carlo methods, and their use in statistics. I also take the opportunity to advertise the SMC workshop in Uppsala (30 Aug – 1 Sept), which features an amazing list of speakers, including my postdoctoral collaborator Jeremy Heng.

Particle filters initially appeared as computational tools to perform online state estimation (i.e. “filtering”) in non-linear, non-Gaussian state space models, with applications in target tracking, signal processing, etc. They have since been generalized to many other settings in statistics, and now constitute serious competitors to Markov chain Monte Carlo methods under the name of “sequential Monte Carlo methods”. These algorithms are often advantageous in terms of parallelization on modern hardware, benefit from a solid theoretical validation, allow the estimation of Bayes factors, and are indeed becoming more and more popular. Below is a tentative reading list on the topic.

Much more detailed reading lists can be found elsewhere online.

So… here we go.

Seminal papers

These three articles present the basics of particle filtering. Once you read them and implement the examples, you’re an expert in particle filtering. Congratulations.

  • Neil J. Gordon, David J. Salmond and Adrian F. M. Smith. “Novel approach to nonlinear/non-Gaussian Bayesian state estimation.” IEE Proceedings F (Radar and Signal Processing) 140(2), 107-113, 1993.
  • Jun S. Liu and Rong Chen. “Sequential Monte Carlo methods for dynamic systems.” Journal of the American Statistical Association 93(443), 1032-1044, 1998.
  • Michael K. Pitt and Neil Shephard. “Filtering via simulation: Auxiliary particle filters.” Journal of the American Statistical Association 94(446), 590-599, 1999.
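If you want to follow the advice above and implement the examples, here is a minimal sketch of the bootstrap filter of Gordon, Salmond and Smith, on a toy linear-Gaussian AR(1) model. The model and all parameter values are illustrative choices of mine, not taken from the papers:

```python
import numpy as np

def bootstrap_filter(y, n_particles=1000, phi=0.9, sigma_x=1.0, sigma_y=0.5, rng=None):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + N(0, sigma_x^2),
    y_t = x_t + N(0, sigma_y^2). Returns filtering means and a log-likelihood estimate."""
    rng = rng if rng is not None else np.random.default_rng(0)
    T = len(y)
    # initialize from the stationary distribution of the AR(1) state
    x = rng.normal(0.0, sigma_x / np.sqrt(1 - phi**2), size=n_particles)
    log_evidence = 0.0
    means = np.empty(T)
    for t in range(T):
        # propagate each particle through the state equation
        x = phi * x + sigma_x * rng.normal(size=n_particles)
        # weight by the Gaussian observation density
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2 - np.log(sigma_y) - 0.5 * np.log(2 * np.pi)
        m = logw.max()
        w = np.exp(logw - m)
        log_evidence += m + np.log(w.mean())   # running log-likelihood estimate
        w /= w.sum()
        means[t] = np.sum(w * x)               # filtering mean estimate
        # multinomial resampling (the "bootstrap" step)
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return means, log_evidence
```

The three ingredients — propagate, weight, resample — are exactly the steps discussed in these three papers; the auxiliary particle filter of Pitt and Shephard refines the weighting/resampling step.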

From time series to generic sampling

The following two articles are instrumental: they take particle filters out of their original time-series setting and turn them into very general algorithms for obtaining samples from probability distributions.

  • Nicolas Chopin. “A sequential particle filter method for static models.” Biometrika 89(3), 539-552, 2002.
  • Pierre Del Moral, Arnaud Doucet and Ajay Jasra. “Sequential Monte Carlo samplers.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68(3), 411-436, 2006.
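To make the generalization concrete, here is a minimal sketch of an SMC sampler with likelihood tempering and random-walk Metropolis moves, in the spirit of these two papers. The temperature ladder, the move kernel, and the step size are illustrative assumptions, not prescriptions from the articles:

```python
import numpy as np

def smc_sampler(loglik, logprior, sample_prior, temps, n=1000, step=0.5, rng=None):
    """SMC sampler targeting prior * lik^temps[t] along a temperature ladder
    temps (from 0 to 1). Returns final particles and a log normalizing-constant
    estimate. loglik and logprior must be vectorized over particles."""
    rng = rng if rng is not None else np.random.default_rng(1)
    x = sample_prior(n, rng)           # particles start from the prior
    log_z = 0.0
    for t in range(1, len(temps)):
        # incremental weights for the tempering step temps[t-1] -> temps[t]
        logw = (temps[t] - temps[t - 1]) * loglik(x)
        m = logw.max()
        w = np.exp(logw - m)
        log_z += m + np.log(w.mean())  # running normalizing-constant estimate
        w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]          # multinomial resampling
        # one random-walk Metropolis move targeting prior * lik^temps[t]
        prop = x + step * rng.normal(size=n)
        log_acc = temps[t] * (loglik(prop) - loglik(x)) + logprior(prop) - logprior(x)
        accept = np.log(rng.uniform(size=n)) < log_acc
        x = np.where(accept, prop, x)
    return x, log_z
```

The same propagate/weight/resample structure as a particle filter, except that “time” is now an artificial sequence of distributions bridging prior and posterior.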

The latter paper also shows how much the theoretical understanding of particle methods had advanced since the seminal articles. And much more has been discovered since then!

Theoretical aspects

The literature on the theory of particle methods is now too vast to survey here, so I only list some relatively self-contained articles. The first one introduces operators acting on probability measures and establishes consistency of particle methods when the number of particles goes to infinity. The second one deals with stability properties of particle methods with respect to the time horizon. The third one introduces a set of weak assumptions to establish such stability properties.

  • Dan Crisan and Arnaud Doucet. “A survey of convergence results on particle filtering methods for practitioners.” IEEE Transactions on Signal Processing 50(3), 736-746, 2002.
  • Frédéric Cérou, Pierre Del Moral and Arnaud Guyader. “A nonasymptotic theorem for unnormalized Feynman–Kac particle models.” Annales de l’IHP Probabilités et statistiques 47(3), 629-649, 2011.
  • Nick Whiteley. “Stability properties of some particle filters.” The Annals of Applied Probability 23(6), 2500-2537, 2013.

The reference book on the theoretical foundations of particle methods is:

  • Pierre Del Moral. “Feynman-Kac Formulae”. Springer New York, 2004.

Some methodological developments

The following articles present original and recent developments, and also give a glimpse of the variety of situations where particle methods have been found useful beyond filtering.

  • Christophe Andrieu, Arnaud Doucet and Roman Holenstein. “Particle Markov chain Monte Carlo methods.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72(3), 269-342, 2010.
  • Fredrik Lindsten, Michael I. Jordan and Thomas B. Schön. “Particle Gibbs with ancestor sampling.” Journal of Machine Learning Research 15, 2145-2184, 2014.
  • Liangliang Wang, Alexandre Bouchard-Côté and Arnaud Doucet. “Bayesian phylogenetic inference using a combinatorial sequential Monte Carlo method.” Journal of the American Statistical Association 110(512), 1362-1374, 2015.
  • Patrick Rebeschini and Ramon van Handel. “Can local particle filters beat the curse of dimensionality?” The Annals of Applied Probability 25(5), 2809-2866, 2015.


Particle methods propagate a large number of particles over relatively few steps, which makes them easy to parallelize. In fact, the particles form a large collection of interacting, non-homogeneous Markov chains. Because of these interactions, the efficient implementation of particle methods on parallel hardware has received some attention:

  • Anthony Lee, Christopher Yau, Michael B. Giles, Arnaud Doucet and Christopher C. Holmes. “On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.” Journal of Computational and Graphical Statistics 19(4), 769-789, 2010.
  • Lawrence M. Murray, Anthony Lee and Pierre E. Jacob. “Parallel resampling in the particle filter.” Journal of Computational and Graphical Statistics 25(3), 789-805, 2016.
  • Nick Whiteley, Anthony Lee and Kari Heine. “On the role of interaction in sequential Monte Carlo algorithms.” Bernoulli 22(1), 494-529, 2016.
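Resampling is precisely the collective operation that creates the interactions discussed in these papers, and it is the main obstacle to parallelization. As an illustration, here is systematic resampling, one common scheme among those compared in this literature; a minimal sketch, not an optimized parallel implementation:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform draw stratifies [0,1) into
    len(weights) evenly spaced points, then indices are found by inverting
    the cumulative weight function. O(N), low variance."""
    rng = rng if rng is not None else np.random.default_rng(2)
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)
```

The cumulative sum and the inversion are the steps that require communication across particles, which is why resampling, rather than propagation or weighting, is the focus of parallel implementations.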

Particle methods for model comparison

Particle methods can be used for evidence estimation, i.e. to estimate Bayes factors. This can be a good reason to use them instead of standard MCMC methods: there is currently no satisfactory way of retrieving the evidence from the output of an MCMC run. This advantage of SMC was already noted in the articles listed above under “From time series to generic sampling”, i.e. as early as 2002. More has been said since then, e.g. in:

  • Luke Bornn, Arnaud Doucet and Raphael Gottardo. “An efficient computational approach for prior sensitivity analysis and cross-validation.” Canadian Journal of Statistics 38(1), 47-64, 2010.
  • Yan Zhou, Adam M. Johansen and John A. D. Aston. “Toward automatic model comparison: an adaptive sequential Monte Carlo approach.” Journal of Computational and Graphical Statistics 25(3), 701-726, 2016.
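The evidence estimator in SMC is the product, over steps, of the average incremental weights. As a sanity check of the idea, here is its single-step special case (plain importance sampling from the prior) on a conjugate Gaussian toy model where the evidence is known in closed form; the model is an illustrative choice of mine:

```python
import numpy as np

rng = np.random.default_rng(3)
y, prior_sd, lik_sd = 1.0, 1.0, 1.0

# draws from the prior, weighted by the likelihood: Z-hat = mean of the weights
theta = rng.normal(0.0, prior_sd, size=200_000)
logw = -0.5 * ((y - theta) / lik_sd) ** 2 - 0.5 * np.log(2 * np.pi * lik_sd**2)
m = logw.max()
log_z_hat = m + np.log(np.exp(logw - m).mean())   # log-sum-exp for stability

# analytic evidence: marginally, y ~ N(0, prior_sd^2 + lik_sd^2)
s2 = prior_sd**2 + lik_sd**2
log_z_true = -0.5 * np.log(2 * np.pi * s2) - y**2 / (2 * s2)
```

In a full SMC sampler the same average-weight computation is performed at each intermediate distribution, and the running sum of log averages is the log-evidence estimate, with no extra cost over the sampling itself.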

If you have suggestions for more articles, please use the comments!


Responses

  1. Víctor Elvira said, on 30 June 2017 at 14:04

     Very nice compilation, Pierre!

  2. dendisuhubdy said, on 30 June 2017 at 18:25

     Reblogged this on The Secret Guild of Silicon Valley.
