Update on inference with Wasserstein distances
Hi again,
As described in an earlier post, Espen Bernton, Mathieu Gerber, Christian P. Robert and I are exploring Wasserstein distances for parameter inference in generative models. Generally, ABC and indirect inference are fun to play with, as they make the user think about useful distances between data sets (i.i.d. or not), which is sort of implicit in classical likelihood-based approaches. Thinking about distances between data sets can be a helpful and healthy exercise, even if not always necessary for inference. Viewing data sets as empirical distributions leads to considering the Wasserstein distance, and we try to demonstrate in the paper that it leads to an appealing inferential toolbox.
In passing, the first author Espen Bernton will be visiting Marco Cuturi, Christian Robert, Nicolas Chopin and others in Paris from September to January; get in touch with him if you’re over there!
We have just updated the arXiv version of the paper, and the main modifications are as follows.
Unbiased MCMC with couplings
Hi,
With John O’Leary and Yves Atchadé, we have just arXived our work on removing the bias of MCMC estimators. Here I’ll explain what this bias is about, and the benefits of removing it.
Particle methods in Statistics
Hi there,
In this post, just in time for the summer, I propose a reading list for people interested in discovering the fascinating world of particle methods, a.k.a. sequential Monte Carlo methods, and their use in statistics. I also take the opportunity to advertise the SMC workshop in Uppsala (30 Aug – 1 Sept), which features an amazing list of speakers, including my postdoctoral collaborator Jeremy Heng.
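To give a flavor of what these methods do, here is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model, in Python; the model, parameter values and function names are mine, purely for illustration, and certainly not tuned for efficiency.

```python
import numpy as np

# A minimal bootstrap particle filter for the toy state-space model
# x_t = 0.9 x_{t-1} + N(0,1), y_t = x_t + N(0,1).
# Returns an estimate of the log-likelihood of the observations y.
def bootstrap_pf(y, n_particles=1000, rho=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)  # initial particles
    loglik = 0.0
    for yt in y:
        x = rho * x + rng.normal(0.0, 1.0, n_particles)        # propagate
        logw = -0.5 * (yt - x) ** 2 - 0.5 * np.log(2 * np.pi)  # N(x, 1) log-weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                         # log-likelihood increment
        x = rng.choice(x, size=n_particles, p=w / w.sum())     # multinomial resampling
    return loglik
```

Each iteration propagates the particles through the dynamics, weights them by the observation density, and resamples; the running sum of log-averaged weights estimates the log-likelihood, which is precisely the quantity that particle MCMC methods build upon.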
Likelihood calculation for the g-and-k distribution
Hello,
An example often used in the ABC literature is the g-and-k distribution (e.g. reference [1] below), which is defined through the inverse of its cumulative distribution function (cdf). It is easy to simulate from such distributions by drawing uniform variables and applying the inverse cdf to them. However, since there is no closed-form formula for the probability density function (pdf) of the g-and-k distribution, the likelihood is often considered intractable. It has been noted in [2] that one can still numerically compute the pdf, by 1) numerically inverting the quantile function to get the cdf, and 2) numerically differentiating the cdf, using finite differences, for instance. As it happens, this is very easy to implement, and I coded up an R tutorial at:
github.com/pierrejacob/winference/blob/master/inst/tutorials/tutorial_gandk.pdf
for anyone interested. This is part of the winference package that goes with our tech report on ABC with the Wasserstein distance (joint work with Espen Bernton, Mathieu Gerber and Christian Robert, to be updated very soon!). This enables standard MCMC algorithms for the g-and-k example. It is also very easy to compute the likelihood for the multivariate extension of [3], since it only involves a fixed number of one-dimensional numerical inversions and differentiations (as opposed to a multivariate inversion).
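For readers who prefer Python to R, the two numerical steps can be sketched as follows. This is an illustrative sketch using the usual parameterization with c = 0.8 (as in [2]), not the winference code.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# g-and-k quantile function (inverse cdf) with the conventional c = 0.8;
# note that (1 - exp(-g z)) / (1 + exp(-g z)) = tanh(g z / 2).
def gandk_quantile(u, A, B, g, k, c=0.8):
    z = norm.ppf(u)
    return A + B * (1.0 + c * np.tanh(g * z / 2.0)) * (1.0 + z**2) ** k * z

# pdf via the two numerical steps: 1) invert the quantile function to get
# u = F(x) by root-finding; 2) differentiate, using pdf(x) = 1 / q'(u)
# with a central finite difference for q'.
def gandk_pdf(x, A, B, g, k, eps=1e-7):
    u = brentq(lambda v: gandk_quantile(v, A, B, g, k) - x, 1e-10, 1 - 1e-10)
    dq = (gandk_quantile(u + eps, A, B, g, k)
          - gandk_quantile(u - eps, A, B, g, k)) / (2.0 * eps)
    return 1.0 / dq
```

A quick sanity check: with A = 0, B = 1, g = k = 0, the g-and-k distribution reduces to a standard Normal, so gandk_pdf(0, 0, 1, 0, 0) should be close to 1/sqrt(2*pi) ≈ 0.3989.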
Surprisingly, most of the papers that present the g-and-k example do not compare their ABC approximations to the posterior; instead, they typically compare the proposed ABC approach to existing ones. Similarly, the so-called Ricker model is commonly used in the ABC literature, and its posterior can be tackled efficiently using particle MCMC methods; the same goes for the M/G/1 model, which can be tackled either with particle MCMC methods or with tailor-made MCMC approaches such as [4].
These examples can still have great pedagogical value in ABC papers, but it would perhaps be nice to see more comparisons to the ground truth when it’s available; ground truth here being the actual posterior distribution.
[1] Fearnhead, P. and Prangle, D. (2012) Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society: Series B, 74, 419–474.
[2] Rayner, G. D. and MacGillivray, H. L. (2002) Numerical maximum likelihood estimation for the g-and-k and generalized g-and-h distributions. Statistics and Computing, 12, 57–75.
[3] Drovandi, C. C. and Pettitt, A. N. (2011) Likelihood-free Bayesian estimation of multivariate quantile distributions. Computational Statistics & Data Analysis, 55, 2541–2556.
[4] Shestopaloff, A. Y. and Neal, R. M. (2014) On Bayesian inference for the M/G/1 queue with efficient MCMC sampling. arXiv preprint arXiv:1401.5548.
Sub-Gaussian property for the Beta distribution (part 1)
With my friend Olivier Marchal (the mathematician, not the filmmaker, nor the cop), we have just arXived a note on the sub-Gaussianity of the Beta and Dirichlet distributions.
The notion, introduced by Jean-Pierre Kahane, is as follows:
A random variable $X$ with finite mean $\mu = \mathbb{E}[X]$ is sub-Gaussian if there is a positive number $\sigma^2$ such that, for all $\lambda \in \mathbb{R}$:

$\mathbb{E}[\exp(\lambda(X-\mu))] \le \exp\left(\frac{\lambda^2\sigma^2}{2}\right).$
Such a constant $\sigma^2$ is called a proxy variance, and we say that $X$ is $\sigma^2$-sub-Gaussian. If $X$ is sub-Gaussian, one is usually interested in the optimal proxy variance:

$\sigma^2_{\mathrm{opt}}(X) = \min\{\sigma^2 \ge 0 : X \text{ is } \sigma^2\text{-sub-Gaussian}\}.$
Note that the variance always gives a lower bound on the optimal proxy variance: $\mathrm{Var}[X] \le \sigma^2_{\mathrm{opt}}(X)$. In particular, when $\sigma^2_{\mathrm{opt}}(X) = \mathrm{Var}[X]$, $X$ is said to be strictly sub-Gaussian.
The sub-Gaussian property is closely related to the tails of the distribution. Intuitively, being sub-Gaussian amounts to having tails lighter than a Gaussian. This is actually a characterization of the property. Let $Z \sim \mathcal{N}(0, \sigma^2)$. Then:

$X \text{ is sub-Gaussian} \iff \mathbb{P}(|X-\mu| \ge t) \le C\, \mathbb{P}(|Z| \ge t) \text{ for all } t \ge 0, \text{ for some constants } C, \sigma^2 > 0.$
That equivalence clearly implies exponential upper bounds for the tails of the distribution, since a Gaussian $Z \sim \mathcal{N}(0, \sigma^2)$ satisfies

$\mathbb{P}(Z > t) \le \exp\left(-\frac{t^2}{2\sigma^2}\right).$
That can also be seen directly: for a $\sigma^2$-sub-Gaussian variable $X$ and any $\lambda > 0$, Markov's inequality gives

$\mathbb{P}(X - \mu > t) \le e^{-\lambda t}\, \mathbb{E}[e^{\lambda(X-\mu)}] \le \exp\left(\frac{\lambda^2\sigma^2}{2} - \lambda t\right).$

The polynomial function $\lambda \mapsto \frac{\lambda^2\sigma^2}{2} - \lambda t$ is minimized on $\mathbb{R}_+$ at $\lambda = t/\sigma^2$, for which we obtain

$\mathbb{P}(X - \mu > t) \le \exp\left(-\frac{t^2}{2\sigma^2}\right).$
In that sense, the sub-Gaussian property of any compactly supported random variable comes for free, since in that case the tails are obviously lighter than those of a Gaussian. A simple general proxy variance is given by Hoeffding's lemma. Let $X$ be supported on $[a,b]$ with $\mu = \mathbb{E}[X]$. Then for any $\lambda \in \mathbb{R}$,

$\mathbb{E}[\exp(\lambda(X-\mu))] \le \exp\left(\frac{\lambda^2 (b-a)^2}{8}\right),$

so $X$ is $\frac{(b-a)^2}{4}$-sub-Gaussian.
Back to the Beta$(\alpha, \beta)$ distribution, where $[a,b] = [0,1]$: Hoeffding's lemma shows the Beta is $\frac{1}{4}$-sub-Gaussian. Finding the optimal proxy variance is a more challenging issue. In addition to characterizing the optimal proxy variance of the Beta distribution in the note, we provide the simple upper bound $\frac{1}{4(\alpha+\beta+1)}$. It matches Hoeffding's bound in the extremal case $\alpha = \beta \to 0$, where the Beta random variable concentrates on the two-point set $\{0,1\}$ (and where Hoeffding's bound is tight).
In getting the bound $\frac{1}{4(\alpha+\beta+1)}$, we prove a recent conjecture made by Sam Elder in the context of Bayesian adaptive data analysis. I'll say more about obtaining the optimal proxy variance in a forthcoming post.
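For the curious, the mgf inequality above is easy to probe numerically. Here is a quick Monte Carlo sanity check in Python (not a proof, and the parameter values are mine): the empirical mgf of a centered Beta sample should stay below the Gaussian mgf bound with Hoeffding's proxy variance 1/4, while the proxy variance from the note is much smaller.

```python
import numpy as np

# Monte Carlo sanity check (not a proof!) of the sub-Gaussian property of
# the Beta distribution: compare the empirical mgf of X - E[X] with the
# Gaussian mgf bound using Hoeffding's proxy variance 1/4.
def mgf_centered(samples, lam):
    # empirical moment generating function of X - mean(X)
    return np.mean(np.exp(lam * (samples - samples.mean())))

rng = np.random.default_rng(1)
alpha, beta = 2.0, 3.0
x = rng.beta(alpha, beta, size=10**6)
hoeffding = 1.0 / 4.0
conjectured = 1.0 / (4.0 * (alpha + beta + 1.0))  # = 1/24 here, much smaller
for lam in (-2.0, -1.0, 1.0, 2.0):
    assert mgf_centered(x, lam) <= np.exp(lam**2 * hoeffding / 2.0)
```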
Cheers!
Julyan
ABC in Banff
Hi all,
Last week I attended a wonderful meeting on Approximate Bayesian Computation in Banff, which gathered a nice crowd of ABC users and enthusiasts, including lots of people outside of computational stats, whom I wouldn’t have met otherwise. Christian blogged about it there. My talk on Inference with Wasserstein distances is available as a video here (joint work with Espen Bernton, Mathieu Gerber and Christian Robert, the paper is here). In this post, I’ll summarize a few (personal) points and questions on ABC methods, after recalling the basics of ABC (ahem).
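Speaking of said basics, plain ABC rejection sampling fits in a few lines of Python; the helper names and the toy Normal example below are mine, not from the talk, and serve only as a reminder of the vanilla scheme.

```python
import numpy as np

# A bare-bones ABC rejection sampler: draw theta from the prior, simulate a
# data set, and keep theta whenever the simulated data fall within eps of the
# observations according to a user-chosen distance between data sets.
def abc_rejection(y_obs, prior_sample, simulate, distance, eps, n_draws):
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(y_obs, simulate(theta)) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a Normal with known unit variance, using
# the absolute difference of sample means as the distance.
rng = np.random.default_rng(0)
y_obs = rng.normal(1.0, 1.0, size=100)
samples = abc_rejection(
    y_obs,
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
    simulate=lambda theta: rng.normal(theta, 1.0, size=100),
    distance=lambda y, z: abs(y.mean() - z.mean()),
    eps=0.2,
    n_draws=5000,
)
```

The accepted draws approximate the posterior, with an approximation quality driven by the tolerance eps and, crucially, by the choice of distance.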
Statistical inference with the Wasserstein distance
Hi! It’s been too long!
In a recent arXiv entry, Espen Bernton, Mathieu Gerber, Christian P. Robert and I explore the use of the Wasserstein distance to perform parameter inference in generative models. A by-product is an ABC-type approach that bypasses the choice of summary statistics; instead, one chooses a metric on the observation space. Our work fits in the minimum distance estimation framework and is particularly related to “On minimum Kantorovich distance estimators”, by Bassetti, Bodini and Regazzini. A recent and very related paper is “Wasserstein training of restricted Boltzmann machines”, by Montavon, Müller and Cuturi, who have similar objectives but do not consider purely generative models. Similarly to that paper, we make heavy use of recent breakthroughs in numerical methods to approximate Wasserstein distances, breakthroughs which were not available to Bassetti, Bodini and Regazzini in 2006.
Here I’ll describe the main ideas in a simple setting. If you’re excited about ABC, asymptotic properties of minimum Wasserstein estimators, Hilbert space-filling curves, delay reconstructions and Takens’ theorem, or SMC samplers with r-hit kernels, check out our paper!
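To fix ideas in the simplest possible setting: for two univariate samples of the same size, the Wasserstein distance between the empirical distributions reduces to matching the sorted samples. A toy Python illustration (not the paper's code, which handles much more general settings):

```python
import numpy as np

# p-Wasserstein distance between two empirical distributions with the same
# number of atoms: in one dimension the optimal coupling simply matches the
# order statistics of the two samples.
def wasserstein_1d(x, y, p=1):
    xs = np.sort(np.asarray(x, dtype=float))
    ys = np.sort(np.asarray(y, dtype=float))
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))
```

For instance, wasserstein_1d([0, 1], [2, 3]) equals 2, since the optimal matching pairs 0 with 2 and 1 with 3. In higher dimensions no such sorting trick exists, which is where the numerical breakthroughs mentioned above come into play.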
Gaussian variates truncated to a finite interval
Alan Rogers, an anthropologist at the University of Utah, got in touch with me about my paper on the simulation of truncated Gaussian distributions (journal version, arXiv version). The method I proposed in that paper works for either a finite interval [a,b] or a semi-infinite one [a,+inf[, but my C code implements only the latter, and Alan needed the former.
Alan thus decided to reimplement my method and several others (including Christian Robert’s accept-reject algorithm proposed in this paper) in C; see here:
https://github.com/alanrogers/dtnorm
Alan also sent me this interesting plot comparing the different methods. The color of a dot at position (a,b) corresponds to the fastest method for simulating N(0,1) truncated to [a,b].
A few personal remarks:
- My method is an accept-reject algorithm, where the proposal is a mixture of uniform distributions on rectangles. The point is to have a large probability that the number of basic operations (multiplications, divisions) needed to return a draw is small. However, the improvement brought by such a method might be observable only in compiled languages. In an interpreted language such as R, Matlab or Python, loops over basic operations come with a certain overhead, which might cancel any improvement. This was the experience of a colleague who tried to implement it in Julia.
- Even in C, this comparison might depend on several factors (computer, compiler, libraries, and so on). If I remember correctly, the choice of random number generator in particular may have a significant impact. (I used the GSL library, which makes it easy to try different generators with the same piece of code.)
- Also bear in mind that some progress has been made on computing the inverse CDF of a unit Gaussian distribution. Hence the basic inverse CDF method, while not the fastest approach, works reasonably well these days, especially (again) in interpreted languages. (Update: Alan tells me the inverse CDF method remains 10 times slower in his C implementation, based on the GSL library.)
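For reference, the inverse CDF method just mentioned is a one-liner in an interpreted language. A Python sketch (the function name is mine; this is the baseline method, not the accept-reject algorithm from the paper):

```python
import numpy as np
from scipy.stats import norm

# Inverse-cdf sampler for N(0,1) truncated to [a, b]: draw U uniformly on
# [Phi(a), Phi(b)] and return Phi^{-1}(U), where Phi is the standard
# Gaussian cdf. Vectorized, hence reasonably fast in interpreted languages.
def rtruncnorm_invcdf(n, a, b, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(norm.cdf(a), norm.cdf(b), size=n)
    return norm.ppf(u)
```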
Faà di Bruno’s note on eponymous formula, trilingual version
The Italian mathematician Francesco Faà di Bruno was born in Alessandria (Piedmont, Italy) in 1825 and died in Turin in 1888. At the time of his birth, Piedmont was part of the Kingdom of Sardinia, led by the Dukes of Savoy. Italy was then unified in 1861, and the Kingdom of Sardinia became the Kingdom of Italy, of which Turin was declared the first capital. At that time, the Piedmontese commonly spoke both Italian and French.
Faà di Bruno is probably best known today for the eponymous formula which generalizes the derivative of a composition of two functions, $(f \circ g)$, to any order:

$\frac{d^n}{dx^n} f(g(x)) = \sum \frac{n!}{m_1!\, m_2! \cdots m_n!}\, f^{(m_1+\cdots+m_n)}(g(x)) \prod_{j=1}^{n} \left(\frac{g^{(j)}(x)}{j!}\right)^{m_j},$

over tuples $(m_1, \ldots, m_n)$ satisfying

$1\, m_1 + 2\, m_2 + \cdots + n\, m_n = n.$
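The formula is easy to check numerically. In the Python sketch below (helper names are mine), I evaluate the sum for f = exp and g(x) = x², whose composed fourth derivative works out by hand to exp(x²)(16x⁴ + 48x² + 12).

```python
import math
from itertools import product

# Numerical evaluation of Faà di Bruno's formula.
# f_deriv(k, y) returns f^(k)(y); g_deriv(j, x) returns g^(j)(x),
# with g_deriv(0, x) = g(x) itself.
def faa_di_bruno(n, x, f_deriv, g_deriv):
    total = 0.0
    # sum over tuples (m_1, ..., m_n) with 1*m_1 + 2*m_2 + ... + n*m_n = n
    for m in product(range(n + 1), repeat=n):
        if sum((j + 1) * mj for j, mj in enumerate(m)) != n:
            continue
        coeff = math.factorial(n)
        for mj in m:
            coeff //= math.factorial(mj)
        term = coeff * f_deriv(sum(m), g_deriv(0, x))
        for j, mj in enumerate(m):
            term *= (g_deriv(j + 1, x) / math.factorial(j + 1)) ** mj
        total += term
    return total

def exp_deriv(k, y):
    # every derivative of exp is exp
    return math.exp(y)

def square_deriv(j, x):
    # derivatives of g(x) = x^2: g' = 2x, g'' = 2, higher orders vanish
    return (x**2, 2.0 * x, 2.0)[j] if j <= 2 else 0.0

# d^4/dx^4 exp(x^2) = exp(x^2) * (16 x^4 + 48 x^2 + 12), computed by hand
x0 = 0.7
expected = math.exp(x0**2) * (16 * x0**4 + 48 * x0**2 + 12)
assert abs(faa_di_bruno(4, x0, exp_deriv, square_deriv) - expected) < 1e-9
```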
Faà di Bruno published his formula in two notes:
- Faà Di Bruno, F. (1855). Sullo sviluppo delle funzioni. Annali di Scienze Matematiche e Fisiche, 6:479–480. Google Books link.
- Faà Di Bruno, F. (1857). Note sur une nouvelle formule de calcul différentiel. Quarterly Journal of Pure and Applied Mathematics, 1:359–360. Google Books link.
They both date from December 1855 and were signed in Paris. They are similar and essentially state the formula without proof. I have arXived a note which contains a translation of the French version into English (reproduced below), as well as the two original notes in French and in Italian. For this I used the Erasmus MMXVI font; thanks Xian for sharing!
postdoc positions at ENSAE
Hi,
Interested in doing a postdoc with me on anything related to Bayesian computation? Please let me know, as there is currently a call for postdoc grants at ENSAE; see below.
Nicolas Chopin
The Labex ECODEC is a research consortium in Economics and Decision Sciences common to three leading French higher education institutions based in the larger Paris area: École polytechnique, ENSAE and HEC Paris. The Labex Ecodec offers:
- One-year postdoctoral fellowships for 2017–2018
- Two-year postdoctoral fellowships for 2017–2019
The monthly gross salary of postdoctoral fellowships is 3 000 €.
Candidates are invited to contact as soon as possible members of the research group (see below) with whom they intend to work.
Research groups concerned by the call:
Area 1: Secure Careers in a Global Economy
Area 2: Financial Market Failures and Regulation
Area 3: Product Market Regulation and Consumer Decision-Making
Area 4: Evaluating the Impact of Public Policies and Firms’ Decisions
Area 5: New Challenges for New Data
Details of the areas can be found on the website:
Deadlines for application:
31st December 2016
Screening of applications and decisions can be made earlier for strong candidates who need an early decision.
The application should be sent to application@labexecodec.fr in PDF format. Please mention in the subject line the area number under which you are applying.
The application package includes:
- A cover letter with the name of a potential supervisor among the group;
- A research statement;
- A letter from the potential supervisor in support of the project;
- A curriculum vitae (with the address of the candidate, phone and email contact);
- The Ph.D. dissertation or papers/preprints;
- Reference letters, including one from the PhD advisor. A letter from a member of the research group with whom the candidate is willing to interact will be appreciated.
Please note that HEC, Genes, and X PhD students are not eligible to apply for this call.
Selection will be based on excellence and a research project matching the group’s research agenda.
Area 1 “Secure careers in a Global Economy”: Pierre Cahuc (ENSAE), Dominique Rouziès (HEC), Isabelle Méjean (École polytechnique)
Area 2: “Financial Market Failures and Regulation”: François Derrien (HEC), Jean-David Fermanian (ENSAE), Edouard Challe (École polytechnique)
Area 3: “Decision-Making and Market Regulation”: Nicolas Vieille (HEC), Philippe Choné (ENSAE), Marie-Laure Allain (École polytechnique)
Area 4: “Evaluating the Impact of Public Policies and Firm’s Decisions”: Bruno Crépon (ENSAE), Yukio Koriyama (École polytechnique), Daniel Halbheer (HEC)
Area 5: “New Challenges for New Data”: Anna Simoni (ENSAE), Gilles Stoltz (HEC)