## ABC in Banff

Hi all,

Last week I attended a wonderful meeting on Approximate Bayesian Computation in Banff, which gathered a nice crowd of ABC users and enthusiasts, including lots of people outside of computational stats, whom I wouldn’t have met otherwise. Christian blogged about it there. My talk on Inference with Wasserstein distances is available as a video here (joint work with Espen Bernton, Mathieu Gerber and Christian Robert, the paper is here). In this post, I’ll summarize a few (personal) points and questions on ABC methods, after recalling the basics of ABC (ahem).
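As a nod to those ABC basics, here is a minimal rejection-ABC sketch on a toy Gaussian model; the model, summary statistic, tolerance and function name are illustrative choices of mine, nothing from the talk:

```python
import numpy as np

def abc_rejection(y_obs, n_accept, eps=0.05, seed=0):
    """Minimal ABC rejection sampler for a toy model:
    prior theta ~ N(0, 1), data y_1, ..., y_n ~ N(theta, 1).
    Summary statistic: the sample mean; distance: absolute difference.
    Proposals from the prior are kept whenever the simulated summary
    falls within eps of the observed one."""
    rng = np.random.default_rng(seed)
    n, s_obs = len(y_obs), np.mean(y_obs)
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.standard_normal()                   # theta ~ prior
        s_sim = theta + rng.standard_normal(n).mean()   # simulate, summarise
        if abs(s_sim - s_obs) < eps:                    # compare summaries
            accepted.append(theta)
    return np.array(accepted)
```

For this conjugate toy model the exact posterior mean is $n\bar y/(n+1)$, so it is easy to check how the ABC approximation improves as eps shrinks.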

## momentify R package at BAYSM14

I presented an arXived paper from my postdoc at the very successful Young Bayesian Conference in Vienna. The big picture of the talk is simple: there are situations in Bayesian nonparametrics where you don't know how to sample from the posterior distribution, but you can only compute posterior expectations (so-called *marginal methods*), so you cannot, for instance, provide credible intervals. But sometimes all the moments of the posterior distribution are available as posterior expectations, so morally you should be able to say more about the posterior distribution than just reporting the posterior mean. To be more specific, we consider a hazard mixture model

$$h(t) = \int k(t; y)\, P(\mathrm{d}y),$$

where $k$ is a kernel, and the mixing distribution $P$ is random and discrete (the Bayesian nonparametric approach).

We consider the survival function $S$, which is recovered from the hazard rate by the transform

$$S(t) = \exp\left(-\int_0^t h(s)\,\mathrm{d}s\right),$$

and some possibly censored survival data with survival function $S$. Then it turns out that all the posterior moments of the survival curve $S(t)$, evaluated at any time $t$, can be computed.

The nice trick of the paper is to use the representation of a distribution in a [Jacobi polynomial] basis where the coefficients are linear combinations of the moments. So one can sample from [an approximation of] the posterior, and with a posterior sample we can do everything! Including credible intervals.

I’ve wrapped up the few lines of code in an R package called momentify (not on CRAN). With a sequence of moments of a random variable supported on [0,1] as an input, the package does two things:

- evaluates the approximate density
- samples from it
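The polynomial trick is short enough to sketch outside the package. Below is a Python version using shifted Legendre polynomials (a special case of the Jacobi family); the function names and the grid-based inverse-CDF sampler are my own illustrative choices, not momentify's actual interface:

```python
import numpy as np
from numpy.polynomial import legendre
from numpy.polynomial.polynomial import Polynomial, polyval

def density_from_moments(moments, grid):
    """Approximate the density of a [0,1]-valued random variable from its
    moments (moments[k] = E[X^k], moments[0] = 1), by expanding it in
    shifted Legendre polynomials: each expansion coefficient is a linear
    combination of the moments."""
    grid = np.asarray(grid, dtype=float)
    f = np.zeros_like(grid)
    for k in range(len(moments)):
        # monomial coefficients of the Legendre polynomial P_k(t)
        pk = legendre.leg2poly([0.0] * k + [1.0])
        # substitute t = 2x - 1 to shift the orthogonality interval to [0,1]
        shifted = Polynomial(pk)(Polynomial([-1.0, 2.0])).coef
        # E[P_k(2X - 1)] is a linear combination of the moments of X
        ek = sum(a * m for a, m in zip(shifted, moments))
        # squared norm of P_k(2x - 1) on [0,1] is 1 / (2k + 1)
        f += (2 * k + 1) * ek * polyval(grid, shifted)
    return f

def sample_from_density(grid, f, n, seed=0):
    """Inverse-CDF sampling on a grid, clipping the small negative values
    that a truncated polynomial expansion can produce."""
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(np.clip(f, 0.0, None))
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, rng.random(n))]
```

With the first two moments of a Beta(2,2) (1/2 and 3/10), the degree-2 expansion recovers the density $6x(1-x)$ exactly, and a sample from the approximation then gives credible intervals for free.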

A package example, a mixture of betas approximated from its first 2 to 7 moments, gives the result shown in the figure.

## MCM’Ski lessons

A few days after the MCMSki conference, I am starting to see the main lessons I gathered there.

- I should really read the full program before attending the next MCMSki. The three parallel sessions looked consistently interesting, and I really regret having missed some talks (in particular Dawn Woodard‘s and Natesh Pillai‘s) and some posters as well (admittedly, due to exhaustion on my part).
- Compared to the previous instance three years ago (in Utah), the main themes have significantly changed. Scalability, approximate methods, non-asymptotic results, 1/n methods … these keywords are now on everyone’s lips. Can’t wait to see if MCQMC’14 will feel that different from MCQMC’12.
- The community is rightfully concerned about scaling Monte Carlo methods to big data, with some people pointing out that models should also be rethought in this new context.
- The place of software developers in the conference, or simply references to software packages in the talks, is much greater than it used to be. It’s a very good sign towards reproducible research in our field. There’s still a lot of work to do, in particular in terms of making parallel computing easier to access (time to advertise LibBi a little bit). On a related note, many people now point out whether their proposed algorithms are parallel-friendly or not.
- Going from the Rockies to the Alps, the food drastically changed from cheeseburgers to just melted cheese. Bread could be found but ground beef and Budweiser were reported missing.
- It’s fun to have an international conference in your home country, but switching from French to English all the time was confusing.

Back in flooded Oxford now!

## Joint Statistical Meeting 2013

Hey,

In a few weeks (August 3-8) I’ll attend the Joint Statistical Meeting in Montréal, Canada. According to Wikipedia it’s been held every year since 1840 and now gathers more than 5,000 participants!

I'll give a talk in a session organized by Scott Schmidler, entitled *Adaptive Monte Carlo Methods for Bayesian Computation*; you can find the session programme in the online program. I'll talk about score vector and observed information matrix estimation in state-space models.

According to rumour and Christian's reflections on past years (2009, 2010, 2011), I should prepare my schedule in advance to really enjoy this giant meeting. So if you want to meet there, please send me an e-mail!

See you in Montréal!

## Back from ISBA Regional Meeting in India

Hello everyone,

and of course Happy New Year (2013 is the international year of statistics!).

Last week the ISBA Regional Meeting was held in Banaras / Varanasi, in the North of India. The conference was well attended, with leading figures such as Jayanta K. Ghosh, José Bernardo, James Berger, Peter Green and Christian Robert (who blogged about it), and around 350 participants overall.

## A glimpse of Inverse Problems

Hi folks!

Last Tuesday a seminar on Bayesian procedures for inverse problems took place at CREST. We had time for two presentations by young researchers, Bartek Knapik and Kolyan Ray. Both presentations dealt with the problem of observing a noisy version of a linear transform of the parameter of interest,

$$Y = K\mu + \frac{1}{\sqrt{n}}\, Z,$$

where $K$ is a linear operator and $Z$ a Gaussian white noise. Both presentations considered asymptotic properties of the posterior distribution (their papers can be found on arXiv, here for Bartek's, and here for Kolyan's). There is a wide literature on asymptotic properties of the posterior distribution in direct models. When looking at the concentration of the posterior $\Pi(\cdot \mid Y)$ toward a *true* parameter $\mu_0$ given the data, with respect to some distance $d$, a well-known problem is to derive concentration rates, that is, the rate $\epsilon_n$ such that

$$\Pi\big(\mu : d(\mu, \mu_0) \geq M \epsilon_n \mid Y\big) \longrightarrow 0 \quad \text{in probability}.$$

For inverse problems, the usual methods, as introduced by Ghosal, Ghosh and van der Vaart (2000), tend to fail, and thus results in this setting are in general difficult to obtain.

Bartek presented some very refined results in the conjugate case. He manages to get results on the concentration rates of the posterior distribution, on Bayesian credible sets, and on Bernstein–von Mises theorems (which state that the posterior is asymptotically Gaussian) when estimating a linear functional of the parameter of interest. Kolyan gave some general conditions on the prior to achieve a concentration rate, and proved that these techniques lead to optimal concentration rates for classical models.

I knew only a little about inverse problems, but both talks were very accessible and I will surely get more involved in this field!

## Recent Advances in Sequential Monte Carlo / Warwick 2012

Hello,

This blog is not dead! And it’s gonna get more active soon.

These last few days, a workshop on Sequential Monte Carlo methods was held at the University of Warwick (link to the webpage). It was a very exciting meeting, efficiently organised by Arnaud Doucet, Adam Johansen, Anthony Lee and Murray Pollock, and hosted by CRiSM. For those who couldn't attend, here's a little summary of my experience (or more exactly, just a bunch of links). Since SMC methods are at the core of my research, I was naturally interested in all the talks (which is exceptional for a three-day workshop filled with 30 talks!). It was probably a good time for a workshop on SMC, since there's a lot of recent activity in the field. My impression is that this renewed interest is mainly due to:

- the Particle MCMC framework, which inspired other algorithms (like our SMC^2 that Nicolas Chopin presented, or the Particle Gibbs with ancestral sampling), renewed interest in the normalising constant estimate in particle filters, and more generally made it possible to estimate parameters in a broad class of time series models with plenty of diverse applications (probably more than 200 citations already);
- theoretical advances, with recent papers from Pierre Del Moral, Nick Whiteley, Ajay Jasra, and many others;
- a new class of exact algorithms for simulating continuous time processes without any discretisation error, based on sequential importance sampling, in a fascinating work by Paul Fearnhead, Krzysztof Łatuszyński and colleagues;
- parallel computing, including the recent GPU trend, which makes SMC all the more attractive compared to purely iterative algorithms like MCMC.

The last point was illustrated at this workshop by recent work from Alexandre Bouchard-Côté and colleagues called "Entangled Monte Carlo", as well as by my own presentation: I talked about a new resampling scheme that avoids global interactions between all the particles and resorts only to multiple pairwise interactions. This is ongoing work with Pierre Del Moral, Anthony Lee, Lawrence Murray and Gareth Peters, which I might describe in more detail in the future!
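Coming back to the first point above: the unbiased normalising-constant (marginal likelihood) estimate produced by a particle filter is precisely what Particle MCMC builds on. Here is a minimal bootstrap particle filter sketch on a toy AR(1)-plus-noise model; the model and parameter values are illustrative choices of mine:

```python
import numpy as np

def bootstrap_pf(y, n_particles, phi=0.9, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for the toy state-space model
        x_t = phi * x_{t-1} + sigma_x * v_t,   y_t = x_t + sigma_y * w_t,
    with v_t, w_t iid N(0, 1). Returns the log of the unbiased estimate
    of the normalising constant p(y_1, ..., y_T)."""
    rng = np.random.default_rng(seed)
    # start from the stationary distribution of the AR(1) state
    x = rng.normal(0.0, sigma_x / np.sqrt(1.0 - phi ** 2), n_particles)
    log_z = 0.0
    for yt in y:
        x = phi * x + sigma_x * rng.standard_normal(n_particles)  # propagate
        logw = (-0.5 * ((yt - x) / sigma_y) ** 2
                - 0.5 * np.log(2.0 * np.pi * sigma_y ** 2))       # weight
        m = logw.max()
        w = np.exp(logw - m)
        log_z += m + np.log(w.mean())     # accumulate log-likelihood factor
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx]                        # multinomial resampling
    return log_z
```

On a linear Gaussian model like this one the estimate can be checked against the exact Kalman filter likelihood; in PMCMC the estimate is simply plugged into a Metropolis–Hastings acceptance ratio.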

Cheers!

## Priors on probability measures

Hi,

For the next GTB meeting at CREST, on 3rd May, I will present Peter Orbanz's work on projective limit random probabilities on Polish spaces. It will follow my previous presentation on the Dirichlet process in Bayesian nonparametrics.

The article provides a means of constructing an arbitrary prior distribution on the set of probability measures by working on its finite-dimensional marginals. The vanilla example is the Dirichlet process, which is characterized by its Dirichlet distribution marginals on any finite partition of the space (other examples are the normalized inverse Gaussian process and the Pólya tree). The figure above illustrates the projective property of the marginals.
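The marginal characterization is easy to probe numerically: draw from a Dirichlet process by truncated stick-breaking and record the masses it assigns to a finite partition; those masses should follow a Dirichlet distribution with parameters $\alpha G_0(B_1), \ldots, \alpha G_0(B_k)$. A Python sketch (the truncation level, uniform base measure and function name are my choices):

```python
import numpy as np

def dp_partition_masses(alpha, n_atoms, bins, n_draws, seed=0):
    """Draw from a Dirichlet process DP(alpha, G0) with G0 = Uniform[0,1]
    via truncated stick-breaking, and return the random probability each
    draw assigns to the cells of a finite partition of [0,1]. Up to the
    truncation, these cell masses are Dirichlet(alpha*G0(B_1), ...,
    alpha*G0(B_k)) distributed -- the projective marginal property."""
    rng = np.random.default_rng(seed)
    out = np.empty((n_draws, len(bins) - 1))
    for i in range(n_draws):
        v = rng.beta(1.0, alpha, n_atoms)                         # stick ratios
        w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))   # stick weights
        w /= w.sum()                           # renormalise the truncated sticks
        atoms = rng.random(n_atoms)            # atom locations drawn from G0
        out[i] = np.histogram(atoms, bins=bins, weights=w)[0]
    return out
```

With $\alpha = 2$, $G_0 = \mathrm{Uniform}[0,1]$ and the partition $\{[0, 1/2), [1/2, 1]\}$, the mass of the first cell should be Beta(1, 1), i.e. uniform, which is easy to verify on simulations.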

Peter will speak at * ISBA 2012 Kyoto *session* : **On the uses of random probabilities in Bayesian inference*, along with Ramses Mena and Antonio Lijoi. I’ll write more about that later on!

## Awesome Bristol

Hey,

Last week there was a workshop on *Confronting Intractability in Statistical Inference*, organised by the University of Bristol and the SuSTain group. It was hosted at Goldney Hall (picture above). It turned out to be a succession of fascinating talks about recent developments and the future of statistical methods for very challenging inference problems. What I appreciated above all was the ambition of many talks, and the generosity of the speakers in sharing so many ideas with the audience.

Among the things I’ve learned there, the following were the most ambitious in my opinion.

## Rochebrune Workshop 2012

Hey,

Last week I attended the Rochebrune workshop for the second time. The organizers' brilliant idea (Liliane Bel and Eric Parent from AgroParisTech, Jean-Jacques Borreux from Liège University) is to mix *ski, stats and spirits* (mostly Génépi and Chartreuse) in a remote alpine chalet at the top of the Megève ski resort.

Most of the attendees are (young) Bayesians working in applied fields, ranging from biology, ecology and epidemiology to meteorology and climatology. We had great talks about fish, trees, birds (Joël's Montagu's harriers), drugs and avalanches. More methodological talks dealt with extremes, Bayesian model averaging, and simulation: variational approximations, INLA, ABC, and MCMC in general. We also had a tutorial on JAGS and WinBUGS/OpenBUGS (and how to interface them with R using rjags and R2WinBUGS). I presented my work on multidimensional covariate-dependent Dirichlet processes (all presentations here).

In addition to the 10 talks per day, a sacrosanct 5-hour skiing slot was reserved in the afternoon, with lessons from crazy Mégevan instructors. They must be really good: Pierre, don't be afraid, I jumped and fell significantly less than two years ago. Have a Chartreuse, cheers!
