Just a quick post to announce that particles now implements several of the smoothing algorithms introduced in our recent paper with Dang on the complexity of smoothing algorithms. Here is a plot that compares their running times for a given number of particles:

All these algorithms are based on FFBS (forward filtering backward smoothing). The first two are not new: O(N^2) FFBS is the classical FFBS algorithm, which has complexity O(N^2).

FFBS-reject uses (pure) rejection to choose the ancestors in the backward step. In our paper, we explain that the running time of FFBS-reject is random, and may have infinite variance. Notice how big the corresponding boxes are, and the large number of outliers.

To alleviate this issue, we introduced two new FFBS algorithms: FFBS-hybrid tries to use rejection, but stops after N failed attempts (and then switches to the more expensive, exact method). FFBS-MCMC simply uses a (single) MCMC step.

Clearly, these two variants run faster, and FFBS-MCMC takes the cake. This has to do with the inherent difficulty of implementing rejection sampling efficiently in Python. I will blog about that point later on (hopefully soon). Also, the running time of FFBS-MCMC is deterministic, and is O(N).

That’s it. If you want to know more about these algorithms (and other smoothing algorithms), have a look at the paper. The script that generated the plot above is also available in particles (in folder papers/complexity_smoothing). This experiment was performed on essentially the same model as the example in the chapter on smoothing in the book. I should add that, in this experiment, the Monte Carlo variance of the output is essentially the same for the four algorithms (so comparing them only in terms of CPU time is fair).
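To make the FFBS-MCMC idea concrete, here is a minimal numpy sketch (not the actual implementation in particles; the toy model, a Gaussian random walk observed with Gaussian noise, and all variable names are assumptions for illustration). The backward step replaces the exact draw of each ancestor by a single independent-Metropolis step, which keeps the cost per time step O(N):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for this sketch): X_t = X_{t-1} + N(0, 1), Y_t = X_t + N(0, 1)
T, N = 20, 500
xs = np.cumsum(rng.normal(size=T))               # hidden states
ys = xs + rng.normal(size=T)                     # observations

def norm_weights(logw):
    w = np.exp(logw - logw.max())
    return w / w.sum()

# Forward pass: bootstrap particle filter with multinomial resampling
X = np.empty((T, N)); W = np.empty((T, N)); A = np.zeros((T, N), dtype=int)
X[0] = rng.normal(size=N)
W[0] = norm_weights(-0.5 * (ys[0] - X[0]) ** 2)
for t in range(1, T):
    A[t] = rng.choice(N, size=N, p=W[t - 1])     # ancestor indices
    X[t] = X[t - 1][A[t]] + rng.normal(size=N)
    W[t] = norm_weights(-0.5 * (ys[t] - X[t]) ** 2)

def log_m(xp, xn):                               # log transition density (up to a constant)
    return -0.5 * (xn - xp) ** 2

# Backward pass, FFBS-MCMC style: one MH step per trajectory and time step
B = np.empty((T, N), dtype=int)                  # indices of the N smoothed trajectories
B[T - 1] = rng.choice(N, size=N, p=W[T - 1])
for t in range(T - 2, -1, -1):
    cur = A[t + 1][B[t + 1]]                     # start from the filter's genealogy
    prop = rng.choice(N, size=N, p=W[t])         # independent proposal from filter weights
    log_ratio = (log_m(X[t][prop], X[t + 1][B[t + 1]])
                 - log_m(X[t][cur], X[t + 1][B[t + 1]]))
    accept = np.log(rng.uniform(size=N)) < log_ratio
    B[t] = np.where(accept, prop, cur)

paths = X[np.arange(T)[:, None], B]              # smoothed trajectories, shape (T, N)
```

Each backward update draws N proposals and evaluates N acceptance ratios, hence the deterministic O(N) cost per time step mentioned above.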

Previous versions of particles relied on a bit of Fortran code to produce QMC (quasi-Monte Carlo) points. This code was automatically compiled during the installation. This was working fine for most users, but not all, unfortunately.

The latest (1.7) version of Scipy includes a stats.qmc sub-module. Particles 0.3 relies on this sub-module to generate QMC points, and is thus a pure Python package. This should mean fewer headaches when installing particles. Please let me know whether this new version is indeed easier to install for you. Of course, make sure you have updated Scipy before installing particles; e.g. `conda update scipy` if you are using conda.

Dang and I wrote a paper on a new class of SMC samplers, called waste-free SMC; see this paper on arxiv (to be published soon in JRSSB). In particular, the latest version describes a particular scenario where it is possible to show formally that waste-free SMC >> standard SMC (in the sense of a lower asymptotic variance).

The module `smc_samplers` now implements waste-free SMC by default (standard SMC remains available, through the option `wastefree=False`). Check the following notebook to see how to run an SMC sampler in particles.

The new module `binary_smc` implements SMC samplers for binary spaces, i.e. {0, 1}^d, following Schäfer and Chopin (2014).

The package now includes a folder called “papers”, which contains scripts that reproduce selected numerical experiments from previous papers:

- Scripts in sub-folder `binarySMC` reproduce most of the numerical experiments from Schäfer and Chopin (2014).
- A script in sub-folder `wastefreeSMC` reproduces the first numerical experiment of Dau & Chopin (2020) on logistic regression. (See Dang’s github repo for the other experiments.)

- Added a new resampling scheme, called killing (which may be traced back to papers and work by Pierre Del Moral).
- Added a tutorial notebook on how to define non-trivial state-space models.
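For intuition, here is a small numpy sketch of how I understand the killing scheme (a hedged illustration, not the actual code in particles; the function name is mine): particle n survives, i.e. keeps its own index, with probability W[n]/max(W), and killed particles draw a replacement index from the multinomial distribution.

```python
import numpy as np

def killing_resampling(W, rng):
    """Killing resampling (sketch): particle n keeps its own index with
    probability W[n] / max(W); otherwise its index is redrawn from the
    multinomial distribution with weights W."""
    N = W.shape[0]
    idx = np.arange(N)
    killed = rng.uniform(size=N) >= W / W.max()   # survive with prob W[n]/max(W)
    idx[killed] = rng.choice(N, size=killed.sum(), p=W)
    return idx

rng = np.random.default_rng(0)
W = np.array([0.1, 0.5, 0.15, 0.25])
idx = killing_resampling(W, rng)
```

A nice property of this scheme is that the particle with the largest weight always survives, so few particles move when the weights are nearly equal.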

If you want to try particles, the first thing to read is the notebook tutorials. The second thing is the documentation of the relevant modules. If you are still lost, feel free to raise an issue on github (or send me an e-mail, but github issues are more practical).

Hi all,

With Leah South (QUT), we are organizing an online workshop on the topic of “Measuring the quality of MCMC output”. The event website, with more info, is here:

https://bayescomp-isba.github.io/measuringquality.html

This is part of the ISBA BayesComp section’s efforts to organize activities while waiting for the next “big” in-person meeting, hopefully in 2023. The event benefits from the generous support of the QUT Centre for Data Science. The event’s website will be regularly updated between now and the event in October 2021, which features three live sessions:

- 11am-2pm UTC on Wednesday 6th October,
- 1pm-4pm UTC on Thursday 14th October,
- 3pm-6pm UTC on Friday 22nd October.

Registration is free but compulsory (form here), as we want to make sure the live sessions remain convivial and focused; hence the rather specific theme. But it’s an exciting topic with lots of very much open questions, and we hope it will attract both practitioners and methodologists. Meanwhile, some material will be available on the website to everyone, including video recordings of presentations and posters, so that the workshop hopefully benefits the wider community.

If you have suggestions for this event, or would like to organize a similar event in the future, on another “BayesComp” topic, do not hesitate to get in touch. Our contact details are on the workshop’s website.

This post is about estimating the parameter of a Bernoulli distribution from observations, in the “Dempster” or “Dempster–Shafer” way, which is a generalization of Bayesian inference. I’ll recall what this approach is about, and describe a Gibbs sampler to perform the computation. Intriguingly, the associated Markov chain happens to be equivalent to the so-called “donkey walk” (not this one), as pointed out by Guanyang Wang and Persi Diaconis.

Denote the observations, or “coin flips”, by x_1, …, x_n. The model stipulates that x_i = 1(u_i ≤ θ), where u_1, …, u_n are independent Uniform(0,1) variables, and θ is the parameter to be estimated. That is, x_i = 1 if the uniform u_i lands below θ, which indeed occurs with probability θ; otherwise x_i = 0. We’ll call the uniform variables “auxiliary”, and denote by n_0 and n_1 the counts of “0” and “1”, with n_0 + n_1 = n.

In a Bayesian approach, we would specify a prior distribution on the parameter; for example, a Beta prior would lead to a Beta posterior on θ. The auxiliary variables would play no role, apart perhaps in Approximate Bayesian Computation. In Dempster’s approach, we can avoid the specification of a prior, and instead “transfer” the randomness from the auxiliary variables to a distribution over subsets of parameters; see ref [1] below. Let’s see how this works.

Given observations x_{1:n}, some configurations of the auxiliary variables u_{1:n} are compatible with the observations, in the sense that there exists some θ such that x_i = 1(u_i ≤ θ) for all i; and other configurations are not compatible. If we denote by I_1 the indices i corresponding to an observed x_i = 1, and likewise I_0 for x_i = 0, we can see that a “feasible” θ exists only when max_{i∈I_1} u_i < min_{j∈I_0} u_j. In that case the feasible θ form the interval [max_{i∈I_1} u_i, min_{j∈I_0} u_j). The following diagram illustrates this.
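A quick numpy illustration of the feasible interval (the variable names, and the values of θ and n, are mine, chosen for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n = 0.3, 50                     # true parameter and sample size (assumed)
u = rng.uniform(size=n)                # auxiliary variables
x = (u <= theta).astype(int)           # coin flips: x_i = 1 iff u_i <= theta

# Feasible thetas: at least the max of the u's over the 1s,
# and strictly below the min of the u's over the 0s.
a = u[x == 1].max() if (x == 1).any() else 0.0
b = u[x == 0].min() if (x == 0).any() else 1.0
# By construction, the true theta always lies in [a, b).
```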

How do we obtain the distribution of these feasible intervals, under the Uniform distribution of u_{1:n} and conditioning on the observations? We could draw n uniforms, sort them in increasing order, and report the interval between the n_1-th and the (n_1+1)-th values (Section 4 in [1]). But that would be no fun, so let us consider a Gibbs sampler instead (taken from [4]). We will sample the auxiliary variables uniformly, conditional upon the observations, and we will proceed by sampling the variables indexed by I_1 given the variables indexed by I_0, and vice versa. The joint distribution of all the variables has density proportional to the indicator 1{max_{i∈I_1} u_i < min_{j∈I_0} u_j}.

From this joint density we can work out the conditionals. We can then express the Gibbs updates in terms of the endpoints of the feasible interval. Specifically, writing the endpoints at iteration t as (a_t, b_t), the Gibbs sampler is equivalent to:

- Sampling a_t = b_{t−1} · V_t^{1/n_1}, with V_t ~ Uniform(0,1); this is the max of n_1 uniforms on (0, b_{t−1}).
- Sampling b_t = a_t + (1 − a_t)(1 − W_t^{1/n_0}), with W_t ~ Uniform(0,1); this is the min of n_0 uniforms on (a_t, 1).
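Here is a minimal numpy sketch of this Gibbs sampler (a hedged illustration; the counts n_0, n_1 and the number of iterations are assumptions, and the two updates use the standard facts that the max of n_1 iid Uniform(0, b) variables can be drawn as b·V^{1/n_1}, and the min of n_0 iid Uniform(a, 1) as a + (1−a)(1 − W^{1/n_0})):

```python
import numpy as np

rng = np.random.default_rng(2)
n0, n1 = 30, 20                       # counts of observed 0s and 1s (assumed)
T = 1000                              # number of Gibbs iterations
a = np.empty(T); b = np.empty(T)
a_cur, b_cur = 0.0, 1.0               # any valid starting interval

for t in range(T):
    # left endpoint: max of n1 uniforms on (0, b_cur)
    a_cur = b_cur * rng.uniform() ** (1.0 / n1)
    # right endpoint: min of n0 uniforms on (a_cur, 1)
    b_cur = a_cur + (1.0 - a_cur) * (1.0 - rng.uniform() ** (1.0 / n0))
    a[t], b[t] = a_cur, b_cur
```

At stationarity, (a_t, b_t) should be distributed as the n_1-th and (n_1+1)-th order statistics of n_0 + n_1 uniforms, i.e. the feasible interval described above.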

This is exactly the model of Buridan’s donkey in refs [2,3] below. The idea is that the donkey, being both hungry and thirsty but unable to choose between the water and the hay, takes a step in either direction alternately.

The donkey walk has been generalized to higher dimensions in [3], and in a sense our Gibbs sampler in [4] is also a generalization to higher dimensions… it’s not clear whether these two generalizations are the same or not. So I’ll leave that discussion for another day.

A few remarks to wrap up.

- It’s a feature of Dempster’s approach that it yields random subsets of parameters, rather than singletons as in standard Bayesian analysis. Dempster’s approach is a generalization of Bayes: if we specify a standard prior and apply “Dempster’s rule of combination”, we retrieve standard Bayes.
- What do we do with these random intervals, once we obtain them? We can compute the proportion of them that intersect, or are contained in, a set of interest, and these proportions are interpreted as measures of agreement, disagreement or indeterminacy regarding that set, as opposed to posterior probabilities in standard Bayes.
- Dempster’s estimates depend on the choice of sampling mechanism and associated auxiliary variables, which is the topic of many discussions in that literature.
- In a previous post I described an equivalence between the sampling mechanism considered in [1] when there are more than two categories, and the Gumbel-max trick… it seems that Dempster’s approach has various intriguing connections.

**References**:

- [1] Arthur P. Dempster, New Methods for Reasoning Towards Posterior Distributions Based on Sample Data, 1966. [link]
- [2] Jordan Stoyanov & Christo Pirinsky, Random motions, classes of ergodic Markov chains and beta distributions, 2000. [link]
- [3] Gérard Letac, Donkey walk and Dirichlet distributions, 2002. [link]
- [4] Pierre E Jacob, Ruobin Gong, Paul T. Edlefsen & Arthur P. Dempster, A Gibbs sampler for a class of random convex polytopes, 2021. [link]

This module implements various variance estimators that may be computed from a single run of an SMC algorithm, à la Chan and Lai (2013) and Lee and Whiteley (2018). For more details, see this notebook.

This module makes it easier to load the datasets included in the package. Here is a quick example:

```python
from particles import datasets as dts

dataset = dts.Pima()
help(dataset)             # basic info on dataset
help(dataset.preprocess)  # how the data was pre-processed
data = dataset.data       # typically a numpy array
```

The library makes it possible to run several SMC algorithms in parallel, using the multiprocessing module. Hai-Dang Dau noticed there was some performance issue with the previous implementation (a few cores could stay idle) and fixed it.

While testing the new version, I noticed that function distinct_seeds (in module utils), which, as the name suggests, generates distinct random seeds for the processes run in parallel, could be very slow in certain cases. I changed the way the seeds are generated (using stratified resampling) to fix the problem. I will discuss this in more detail in a separate blog post.

Development of this library is partly driven by interactions with users. For instance, the next version will have a more general MvNormal distribution (allowing for a covariance matrix that varies across particles), because one colleague got in touch and needed that feature.

So don’t be shy, if you don’t see how to do something with particles, please get in touch. It’s likely our interaction will help me to either improve the documentation or add new, useful features. Of course, I also welcome direct contributions (through pull requests)!

Otherwise, I have several ideas for future releases, but, for the next one, it is likely I will focus on the following two areas.

My priority #1 is to implement waste-free SMC in the package, following our recent paper with Dang. (Dang has already released his own implementation, which is built on top of particles; but, given that waste-free SMC seems to offer better performance than standard SMC samplers, it seems important to have it available in particles.)

When this is done, I plan to add several important applications of SMC samplers, such as:

- the computation of orthant probabilities (Ridgway, 2014);
- variable selection (Schäfer and Chopin, 2012);
- ABC, perhaps using the rare-event approach of Prangle et al (2018).

I also plan to document SMC samplers a bit better.

Python libraries such as Tensorflow, Pytorch or JAX are all the rage in machine learning. They offer access to very fancy stuff, such as auto-differentiation and computation on the GPU.

I have started to play a bit with Pytorch, and even have a working implementation of a particle filter that runs entirely on the GPU. The idea is to make the core parts of particles completely independent of numpy. In that way, one may use Pytorch tensors to store the particles and their weights. This is really work in progress.

When I’m asked by students whether they should accept a referee invitation (be it for a stat journal or a machine learning conference), I almost invariably say yes. I think that there is a lot to be learnt from refereeing papers, and that this is worth the time spent in the process. I’ll detail in this post why I think so.

First, this post is not about tips on *how* to write a referee report, but rather on *why* you should referee. It is instructive to consult tips on the *how*s, and good posts can be found out there. Note that some journals also have specific guidelines.

Before diving into the benefits of refereeing, let me first say that a referee invitation can also be declined for many good reasons: in case of a conflict of interest (CoI), and/or if some of the authors are too close to you in some sense (although in some fields with a tiny community, this almost inevitably happens); if you do not feel qualified enough; or sometimes, if you feel qualified but the refereeing task seems overwhelming due to the length or technicality of the paper. Do not feel obliged to accept invitations from journals you do not know, and of course ignore those coming from predatory journals or publishers (use this checklist). In any case, be aware that it is OK to decline an invitation. Keep in mind that the associate editor in charge will very much appreciate pointers to alternative referees.

Now, what are the benefits of refereeing? It is a legitimate question, given that refereeing is usually time-consuming, done on a voluntary basis, and without any direct or instant reward. So it is important to understand what you can gain from it.

**Learning about the editorial process**

In the early stages of an academic career, refereeing papers is an opportunity to learn, by doing, about the ins and outs of the editorial mechanism. You do not get the chance to practice replying to referee reports every other day when you are a student. But by reviewing papers, you may also get to see replies by the authors, and reports from other referees (e.g. in revision rounds). This may help you build some habits for when your turn comes to reply to referees!

**Opening research interests**

We are usually asked to referee papers in our own area of expertise, but accepting to review papers slightly outside of one’s research interests can be rewarding. Be curious! There is a chance that reading submitted papers will trigger new research directions of your own. This happened to me at least twice: I started working on Bayesian deep learning after refereeing an ICLR paper dealing with the behaviour of neural networks in the infinitely wide limit; and (dis)proving a conjecture stated in a COLT submission stimulated a new line of research of mine on the sub-Gaussian property of random variables. Note that, in order to start working on such submitted papers in a legitimate way, you should ensure that they are also available as preprints on some open repository like arxiv.

**Prompting new opportunities**

Refereeing papers surely increases your visibility. It is also a preliminary step towards becoming an associate editor. I’m an AE for several stat journals, and managing papers is a task that I find enjoyable, with a social side that consists of writing referee invitation messages to colleagues. This helps to connect, or stay in touch, with colleagues we have few occasions to meet at conferences these days!

Andras Fulop, Jeremy Heng (both ESSEC) and I (Nicolas Chopin, ENSAE, IPP) are currently advertising a post-doc position on developing SMC methods for challenging models found in Finance and Econometrics. If you are interested, click here for more details, and get in touch with us.

Ever wanted to learn more about particle filters, sequential Monte Carlo, state-space/hidden Markov models, PMCMC (particle MCMC), SMC samplers, and related topics?

In that case, you might want to check the following book by Omiros Papaspiliopoulos and me, which has just been released by Springer:

and which may be ordered from their website, or from your favourite book store.

The aim of the book is to cover the many facets of SMC: the algorithms, their practical uses in different areas, the underlying theory, how they may be implemented in practice, etc. Each chapter contains a “Python corner” which discusses the practical implementation of the covered methods in Python, a set of exercises, and bibliographical notes. Speaking of chapters, here is the table of contents:

- Introduction
- Introduction to state-space models
- Beyond state-space models
- Introduction to Markov processes
- Feynman-Kac models: definition, properties and recursions
- Finite state-spaces and hidden Markov models
- Linear-Gaussian state-space models
- Importance sampling
- Importance resampling
- Particle filtering
- Convergence and stability of particle filters
- Particle smoothing
- Sequential quasi-Monte Carlo
- Maximum likelihood estimation of state-space models
- Markov chain Monte Carlo
- Bayesian estimation of state-space models and particle MCMC
- SMC samplers
- SMC^2, sequential inference in state-space models
- Advanced topics and open problems

And here is one fancy plot taken from the book. (For some explanation, you will have to read it!)

A big thanks to all the colleagues who took the time to read draft versions and send feedback (see the introduction for a list of names). Also, don’t write books, folks. Seriously, it takes WAY too much time…

Hi all,

This post is about a way of sampling from a Categorical distribution, which appears in Arthur Dempster’s approach to inference as a generalization of Bayesian inference (see Figure 1 in “A Generalization of Bayesian Inference”, 1968), under the name “structure of the second kind”. It’s the starting point of my on-going work with Ruobin Gong and Paul Edlefsen, which I’ll write about another day. This sampling mechanism turns out to be strictly equivalent to the “Gumbel-max” trick that got some attention in machine learning; see e.g. this blog post by Francis Bach.

Let’s look at the figure above: the encompassing triangle is the “simplex” with 3 vertices (K vertices more generally). Any point within the triangle is a convex combination of the vertices v_1, …, v_K, with non-negative “weights” w_1, …, w_K summing to one; the weights are the “barycentric coordinates” of the point. Any point w in the triangle induces a partition into K sets Δ_1, …, Δ_K. Each “sub-simplex” Δ_k can be obtained by considering the entire simplex and replacing vertex v_k by w. It has a volume equal to w_k relative to the volume of the entire simplex. Can you see why? If not, it’s OK, great scientific endeavors require a certain degree of trust and optimism.

Since the volume of each Δ_k is w_k, if we sample a point uniformly within the encompassing simplex, it will land within Δ_k with probability w_k. In other words, we can sample from a Categorical distribution with probabilities (w_1, …, w_K) by sampling a point uniformly within the simplex, and identifying the index k such that the point lands in Δ_k. This appears in various places in Arthur Dempster’s articles (see references below), because Categorical distributions provide a pedagogical setting for new methods of statistical inference, and because this sampling mechanism does not rely on any arbitrary ordering of the categories (contrary to “inverse transform sampling”).

How does this relate to the Gumbel-max trick? One way of sampling uniformly within the simplex is to sample K independent Exponential(1) variables E_1, …, E_K and to define the weights as E_k / (E_1 + … + E_K). Furthermore, such a point is within Δ_k, for a given k, if and only if E_k / w_k ≤ E_j / w_j for all j. The next figure illustrates such inequalities: the points whose coordinates satisfy one of them lie under/above some line that originates from a vertex of the simplex and goes through the point.

An Exponential(1) is also minus the logarithm of a Uniform(0,1). Putting all these pieces together, a uniform point in the simplex is within Δ_k if and only if, for all j,

log w_k − log E_k ≥ log w_j − log E_j.

Since −log E_j is a Gumbel variable, the above mechanism is equivalent to selecting k = argmax_j (log w_j + G_j), where the G_j are independent Gumbel variables. It’s the Gumbel-max trick!
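The equivalence is easy to check numerically; here is a short numpy demo (the weights are example values chosen for the demo):

```python
import numpy as np

rng = np.random.default_rng(3)
w = np.array([0.5, 0.3, 0.2])           # category probabilities (example values)
E = rng.exponential(size=(100_000, 3))  # Exponential(1) draws, one row per sample

# Dempster's mechanism: normalizing each row of E gives a uniform point of the
# simplex, which lands in sub-simplex k when E_k / w_k is the smallest ratio.
k_simplex = np.argmin(E / w, axis=1)

# Gumbel-max trick: argmax of log w_k + G_k, with G_k = -log(E_k) standard Gumbel.
k_gumbel = np.argmax(np.log(w) - np.log(E), axis=1)
```

The two index sequences coincide draw by draw (the argmin of E_k/w_k is the argmax of log w_k − log E_k), and the empirical frequencies of each index match w.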

- It’s hard to trace back the first instance of this sampling mechanism, but it appears in several of Arthur Dempster’s articles, e.g. “New methods for reasoning towards posterior distributions based on sample data”, 1966, and it is discussed at length in “A class of random convex polytopes”, 1972.
- The connection occurred to me while reading Xi’an’s blog post, which points to this interesting article on Emil Gumbel, academic in Heidelberg up to his exile in 1932, “pioneer of modern data journalism” and active opponent to the nazis. Quoting from the article, “His fate was sealed when, at a speech in memory of the 700,000 who had perished of hunger in the winter of 1916/17, he remarked that a rutabaga would certainly be a better memorial than a scantily clad virgin with a palm frond”.
- The Gumbel-max trick is interesting for many reasons: it amounts to viewing sampling as an optimization problem, it can be “relaxed” in various useful ways, etc. In Art Dempster’s work that sampling mechanism is appealing because of its invariance by relabeling of the categories (“category 2” is not between “category 1” and “category 3”). This matters when performing inference with Categorical distributions (i.e. with count data) using Art Dempster’s approach, because the estimation depends on the choice of sampling mechanism and not simply on the likelihood function.

Hi everyone,

This short post is just to point to a course on “Couplings and Monte Carlo”, available here: https://sites.google.com/site/pierrejacob/cmclectures. Versions of the course were given at Université Paris-Dauphine in February 2020 (thanks Robin Ryder and Christian P. Robert), at the University of Bristol in March 2020 (thanks Anthony Lee), and at the University of Torino for the M.Sc. in Stochastics and Data Science in May 2020 (thanks Matteo Ruggiero). I am grateful to these colleagues and their institutions for supporting this course. The course website points to about 100 pages of lecture notes, and 16 videos are available on youtube. It is intended for advanced undergraduate or graduate students with some previous exposure to Monte Carlo methods. This is work in progress; as I am hoping to develop the course over the coming years, feedback would be most welcome.
