Some thoughts on the life of a mathematician, by Villani

Posted in General by Julyan Arbel on 3 November 2015

Some time ago, Cédric Villani came to Turin to deliver two talks: one intended for youngsters (high school level, say), the other for a wider audience, as a recipient of the Peano Prize. He commented live, in Italian per favore:

“Grazie mille! Un grande piacere e un grande onore per me!” (“Thank you very much! A great pleasure and a great honour for me!”)

I attended both. I attended the first because I am acting as a research advisor for Math en Jeans groups. Villani spoke about his book, Birth of a Theorem, or Théorème Vivant. He also shared a list of se7en thoughts/tips about doing research, with illustrations. I find them quite inspiring; here they are.

  1. Documentation/literature
    He illustrated this by showing the Wikipedia page for Faà di Bruno’s formula. I like this choice, since the formula enters moment computations for objects I use every day, and also because Faà di Bruno lived in the Italian Piedmont, precisely in Turin.
  2. Motivation
    “The most important and the most mysterious.”
  3. Favorable environment
    He showed pictures of several places where he has worked, including the Institut Henri Poincaré. I am not sure that this one is the most favorable environment for his own scientific productivity (as its Director, I mean).
  4. Exchanges
    Meaning exchanges between scientists, not trade. He briefly explained polymath projects and displayed a snapshot of Gowers’s Weblog as an illustration of how diverse the exchanges he has in mind can be. I also believe that blogs are a great information medium :)
  5. Constraints
    With snapshots of Musica Ricercata sheet music, and a paragraph of La disparition, a novel by Georges Perec written without the letter e. Writing this makes me realize how foolish such an enterprise would look in mathematics.
  6. Work & Intuition
    Interesting to see these two at the same level.
  7. Perseverance & Luck
    Same comment as for point 6.


El Capitan OS X and LaTeX

Posted in LaTeX by Julyan Arbel on 30 October 2015


El Capitan is a very nice mountain. It’s also the latest OS X version, which messes things up with \LaTeX. Be aware of this before you update. I wasn’t!

I quote from a fix explained here:

Under OS X 10.11, El Capitan, writing to “/usr” is no longer allowed, even with Administrator privileges. The usual symbolic link to the active \TeX Distribution, “/usr/texbin”, is therefore removed (if it was there from a previous OS version) and cannot be installed. Many GUI applications have the path to those binaries set to “/usr/texbin” by default and will no longer find the binaries there.

I had to reinstall MacTeX, then update my \LaTeX GUI application (Texmaker), and finally replace every “/usr/texbin” with “/Library/TeX/texbin”, as shown below.
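If you want to check from a script which of the two locations actually exists on your machine, a quick check along these lines does the job (a minimal Python sketch; the two paths are simply the ones quoted above):

```python
import os

# Old (pre-El Capitan) and new locations of the TeX binaries symlink.
for path in ["/usr/texbin", "/Library/TeX/texbin"]:
    print(path, "->", "found" if os.path.isdir(path) else "missing")
```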





Why do shrinking priors shrink?

Posted in General, Statistics by JB Salomond on 19 October 2015


Hello there!

While I was in Amsterdam, I took the opportunity to go and work with the Leiden crowd, and more particularly with Stéphanie van der Pas and Johannes Schmidt-Hieber. Since Stéphanie had already obtained neat results for the Horseshoe prior and Johannes some super cool results for the spike and slab prior, they were the first choice to team up with to work on sparse models. And guess what? We have just arXived a paper in which we study the sparse Gaussian sequence model

X_i = \theta_i + \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(0,1), \quad i=1,...,n,

where only a small number p_n \ll n of the \theta_i are non-zero.
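For concreteness, here is a minimal simulation of such a nearly black sequence (a sketch only; the values n = 1000, p_n = 10 and the signal size 5 are arbitrary choices, not the ones from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n, p_n = 1000, 10                      # sequence length, number of non-zero means
theta = np.zeros(n)
theta[:p_n] = 5.0                      # arbitrary non-zero signal values

X = theta + rng.standard_normal(n)     # X_i = theta_i + eps_i, eps_i ~ N(0, 1)
```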

There is a rapidly growing literature on shrinking priors for such models; just look at Polson and Scott (2012), Caron and Doucet (2008), Carvalho, Polson, and Scott (2010) among many, many others, or simply have a look at the program of the last BNP conference. There is also a growing literature on the theoretical properties of some of these priors. The Horseshoe prior was studied in van der Pas, Kleijn, and van der Vaart (2014), an extension of the Horseshoe was then studied in Ghosh and Chakrabarti (2015), and recently the spike and slab Lasso was studied in Rocková (2015) (see also Xi’an’s ’Og).

All these results are super nice, but we still want to know: why do some shrinking priors shrink so well while others do not?! As we are all mathematicians here, I will reformulate this last question: what are the conditions on the prior under which the posterior contracts at the minimax rate [1]?

We considered a Gaussian scale mixture prior on the sequence (\theta_i)

\theta_i \sim p(\theta_i) = \int \frac{e^{-\theta_i^2/(2\sigma^2)}}{\sqrt{2\pi \sigma^2}} \pi(\sigma^2) d\sigma^2

since this family of priors encompasses all the ones studied in the papers mentioned above (and more), so it seemed general enough.
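To fix ideas, drawing from such a prior is a two-step procedure: draw \sigma^2 from the mixing density \pi, then draw \theta_i \sim \mathcal{N}(0,\sigma^2). Below is a minimal sketch with a half-Cauchy mixing density on \sigma, a horseshoe-type choice; the unit scale is an arbitrary illustration, not the exact specification used in any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_scale_mixture_prior(n, rng):
    """Draw n coordinates from the prior theta_i | sigma_i ~ N(0, sigma_i^2),
    with sigma_i following a half-Cauchy(0, 1) mixing density (horseshoe-type)."""
    sigma = np.abs(rng.standard_cauchy(n))   # local scales drawn from the mixing density
    return sigma * rng.standard_normal(n)    # theta_i ~ N(0, sigma_i^2)

theta_draw = sample_scale_mixture_prior(1000, rng)
```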

Our main contribution is to give conditions on \pi such that the posterior converges at the right rate. We showed that in order to recover the parameters \theta_i that are non-zero, the prior should have tails that decay at most exponentially fast, which is similar to the condition imposed for the spike and slab prior. Another expected condition is that the prior should put enough mass around 0, since our assumption is that the parameter vector \theta is nearly black, i.e. most of its components are 0.

More surprisingly, in order to recover the zero parameters correctly, one also needs some conditions on the tails of the prior. More specifically, the prior’s tails cannot be too heavy: if they are, we can construct a prior that puts enough mass near 0 but whose posterior does not concentrate at the minimax rate.

We showed that these conditions are satisfied for many priors including the Horseshoe, the Horseshoe+, the Normal-Gamma and the Spike and Slab Lasso.

Gaussian scale mixtures are also quite simple to use in practice. As explained in Caron and Doucet (2008), a simple Gibbs sampler can be implemented to sample from the posterior (a toy sketch is given below). We conducted a simulation study to evaluate the sharpness of our conditions, computing the \ell_2 loss for the Laplace prior, a global-local scale mixture of Gaussians (hereafter called the bad prior for simplicity), the Horseshoe and the Normal-Gamma prior. The first two do not satisfy our conditions, the last two do. The results are reported in the picture below.
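The sketch below shows the kind of two-block Gibbs update such mixtures allow. Caveat: it is not the Caron and Doucet sampler; for conjugacy I plug in an inverse-gamma(a, b) mixing density on each \sigma_i^2 purely for illustration, whereas the priors compared above (horseshoe, normal-gamma, …) each require their own \sigma_i^2 update (e.g. via slice sampling for the half-Cauchy), which is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_sweep(X, sigma2, a=0.5, b=0.5, rng=rng):
    """One Gibbs sweep for X_i ~ N(theta_i, 1), theta_i ~ N(0, sigma_i^2),
    sigma_i^2 ~ inverse-gamma(a, b) (an illustrative conjugate mixing density)."""
    # theta_i | X_i, sigma_i^2 ~ N(s_i X_i, s_i) with shrinkage factor s_i = sigma_i^2 / (1 + sigma_i^2)
    s = sigma2 / (1.0 + sigma2)
    theta = s * X + np.sqrt(s) * rng.standard_normal(X.size)
    # sigma_i^2 | theta_i ~ inverse-gamma(a + 1/2, b + theta_i^2 / 2)
    sigma2 = 1.0 / rng.gamma(a + 0.5, 1.0 / (b + 0.5 * theta**2))
    return theta, sigma2
```

Iterating the sweep and averaging the \theta draws gives a point estimate whose \ell_2 distance to the true \theta is the kind of loss compared in the picture.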


As we can see, priors that do and do not satisfy our conditions show different behaviours (it seems that the priors that do not fit our conditions have an \ell_2 risk larger than the minimax rate by a factor of n). This seems to indicate that our conditions are sharp.

At the end of the day, our results expand the class of shrinkage priors with theoretical guarantees for the posterior contraction rate. Not only can they be used to obtain the optimal posterior contraction rate for the horseshoe+, the inverse-Gaussian and normal-gamma priors, but the conditions also provide some characterization of the properties of sparsity priors that lead to desirable behaviour. Essentially, the tails of the prior on the local variance should be at least as heavy as Laplace, but not too heavy, and there needs to be a sizable amount of mass around zero compared to the amount of mass in the tails, in particular when the underlying mean vector grows more sparse.


Caron, François, and Arnaud Doucet. 2008. “Sparse Bayesian Nonparametric Regression.” In Proceedings of the 25th International Conference on Machine Learning, 88–95. ICML ’08. New York, NY, USA: ACM.

Carvalho, Carlos M., Nicholas G. Polson, and James G. Scott. 2010. “The Horseshoe Estimator for Sparse Signals.” Biometrika 97 (2): 465–80.

Ghosh, Prasenjit, and Arijit Chakrabarti. 2015. “Posterior Concentration Properties of a General Class of Shrinkage Estimators Around Nearly Black Vectors.”

van der Pas, S. L., B. J. K. Kleijn, and A. W. van der Vaart. 2014. “The Horseshoe Estimator: Posterior Concentration Around Nearly Black Vectors.” Electron. J. Stat. 8: 2585–2618.

Polson, Nicholas G., and James G. Scott. 2012. “Good, Great or Lucky? Screening for Firms with Sustained Superior Performance Using Heavy-Tailed Priors.” Ann. Appl. Stat. 6 (1): 161–85.

Rocková, Veronika. 2015. “Bayesian Estimation of Sparse Signals with a Continuous Spike-and-Slab Prior.”

  1. For those wondering why the heck the minimax rate shows up here, just remember that a posterior that contracts at the minimax rate (of order p_n \log(n/p_n) in this sparse setting) induces an estimator which converges at the same rate. It also ensures that confidence regions will not be too large.

Leave the Pima Indians alone: the R package

Posted in General by nicolaschopin on 4 September 2015

Hi there,

while everyone was away in July, James Ridgway and I posted our “leave (the) Pima Indians alone” paper on arXiv, in which we discuss to what extent probit/logit regression and not-too-big datasets (such as the now famous Pima Indians dataset) constitute a relevant benchmark for Bayesian computation.

The actual title of the paper is “Leave Pima Indians alone…”, but Xi’an changed it to “Leave *the* Pima Indians alone…” when discussing it on his blog. Any opinion on whether it sounds better with “the”?


Man in the maze, the official symbol of the Pima tribe; perhaps a metaphor for slow convergence of certain MCMC schemes

On a different note, one of our findings is that Expectation-Propagation works wonderfully for such models; yes it is an approximate method, but it is very fast, and the approximation error is consistently negligible on all the datasets we looked at.

James has just posted the EPGLM package on CRAN; it computes an EP approximation of the posterior of a logit or probit model. The documentation is a bit terse at the moment, but it is very straightforward to use.

Comments on the package, the paper, its grammar or Pima Indians are most welcome!

Who has the biggest in Bayesian Nonparametrics?

Posted in General by Julyan Arbel on 2 September 2015

A graph of balls

This very fine title quotes a pretty hilarious banquet speech by David Dunson at the last BNP conference held in Raleigh last June. The graph is by François Caron who used it in his talk there. See below for his explanation.

After the summer break, back to work. The academic year to come looks promising from a BNP point of view, not least because three special issues have been announced: in Statistics & Computing (guest editors: Tamara Broderick (MIT), Katherine Heller (Duke), Peter Mueller (UT Austin)), in the Electronic Journal of Statistics (guest editor: Subhashis Ghosal (NCSU)), and in the International Journal of Approximate Reasoning (proposal deadline December 1st; guest editors: Alessio Benavoli (Lugano), Antonio Lijoi (Pavia) and Antonietta Mira (Lugano)).

BNP is also going to infiltrate MCMSki V (Lenzerheide, Switzerland, January 4–7, 2016) with three sessions with a BNP flavor, in addition to plenary speakers David Dunson and Michael Jordan. The International Society for Bayesian Analysis World Meeting, 13–17 June 2016, should also host plenty of BNP sessions, and a De Finetti Lecture by Persi Diaconis (Stanford University). (more…)

Slides of SMC2015

Posted in General by nicolaschopin on 31 August 2015


Adam Johansen, Thomas Schön and I co-organised SMC2015, a workshop on Sequential Monte Carlo methods that took place at ENSAE last week. In case you missed it, I’ve just uploaded the slides of most talks here. Enjoy!

Turing revisited in Turin, and Oxford

Posted in General by Julyan Arbel on 18 June 2015

Are you paying attention? Good. If you are not listening carefully, you will miss things. Important things.

With colleagues Stefano Favaro and Bernardo Nipoti from Turin and Yee Whye Teh from Oxford, we have just arXived an article on discovery probabilities. If you are looking for some info on a space shuttle, a cycling team or a TV channel, it’s the wrong place. Instead, discovery probabilities are central to ecology, biology and genomics where data can be seen as a population of individuals belonging to an (ideally) infinite number of species. Given a sample of size n, the l-discovery probability D_{n}(l) is the probability that the next individual observed matches a species with frequency l in the n-sample. For instance, the probability of observing a new species D_{n}(0) is key for devising sampling experiments.

By the way, why Alan Turing? Because, together with his fellow Bletchley Park researcher Irving John Good (who also appears in The Imitation Game), Turing is known for the so-called Good-Turing estimator of the discovery probability,

\check{D}_{n}(l) = (l+1)\frac{m_{l+1,n}}{n},

which involves m_{l+1,n}, the number of species with frequency l+1 in the sample (the frequency of frequencies, if you follow me). As it happens, this estimator, defined in Good’s 1953 Biometrika paper, has become wildly popular in the ecology, biology and genomics communities ever since, at least in the small circles where wild popularity and probability aren’t mutually exclusive.
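For concreteness, here is how one would compute this estimator from a list of species labels (a minimal Python sketch; the toy sample is made up):

```python
from collections import Counter

def good_turing(sample, l):
    """Good-Turing estimate of D_n(l): (l + 1) * m_{l+1,n} / n."""
    n = len(sample)
    species_freq = Counter(sample)            # species -> frequency in the n-sample
    m = Counter(species_freq.values())        # frequency -> number of species with that frequency
    return (l + 1) * m[l + 1] / n

sample = ["a", "a", "b", "c", "c", "c", "d"]  # toy sample of species labels
print(good_turing(sample, 0))                 # estimated probability of a new species: 2/7
```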

Simple explicit estimators \hat{\mathcal{D}}_{n}(l) of discovery probabilities in the Bayesian nonparametric (BNP) framework of Gibbs-type priors were given by Lijoi, Mena and Prünster in a 2007 Biometrika paper. The main difference between the two estimators of D_{n}(l) is that Good-Turing involves n and m_{l+1,n} only, while the BNP estimator involves n, m_{l,n} (instead of m_{l+1,n}), and k_n, the total number of observed species. It has been shown in the literature that the BNP estimators are more reliable than the Good-Turing estimators.

How do we contribute? (i) We describe the posterior distribution of the discovery probabilities in the BNP model, which is pretty useful for deriving exact credible intervals for the estimates, and (ii) we investigate the large-n asymptotic behavior of the estimators.


Who is Julia?

Posted in General by JB Salomond on 4 June 2015

Hi there!

Unfortunately this post is indeed about statistics…

If you are randomly walking around the statistics blogs, you have probably heard of this new language called Julia. The developers say it is as easy to write as R and as fast as C (!), which is quite a catchy way of selling their work. After talking with an enthusiastic Julia user in Amsterdam, I decided to give it a try. Here I am sharing my first impressions.

First things first, the installation is as easy as for any other language, plus there is a neat package manager that lets you get started quite easily. In this respect it is very similar to R.
On the minus side, I have become a big fan of RStudio, which Julian (… oops, Julyan) told you about a long time ago; these kinds of programs really make your life easier. I thus tried Juno, which turned out to be cumbersome and terribly slow. I would have loved an IDE for Julia up to the RStudio standard. Never mind.

Now let’s talk a little about what is really interesting: is their catchphrase false advertising or not?!

There are a bunch of relatively good tutorials online which are really helpful for learning the basic vocabulary, but if, like me, you are used to coding in R and/or Python, you should pick it up pretty fast: you can almost copy-paste your favourite code into Julia and, with a few adjustments, it will work. So, as easy to write as R: quite so.

I then compared computation times for some of my latest code, and there came the good surprise! Code that took a handful of minutes to run in R, mainly because of unavoidable loops, took a couple of seconds in Julia, without any other sort of optimization. The handling of big objects is smooth, and I did not run into the memory problems that R was suffering from.

So far so good! But of course there have to be some drawbacks. The first one is the poor package repository compared to CRAN, or even to what you can get for Python. This will of course improve in the next few years, as the language is still quite new, but it is bothersome to have to re-code something when you are used to simply loading a package in R. Another, probably less important, problem is the lack of data visualization methods, and especially the absence of ggplot2, which we have grown quite fond of around here. There is of course Gadfly, which is quite close, but once again it is so far very limited compared to what I was used to…

All in all, I am happy to have tried Julia, and I am quite sure that I will be using it a lot from now on. However, even if it is great from an efficiency point of view, and way easier to learn than C (which I should have done a while ago), R and its tremendous package repository are far from beaten.

Oh, and by the way, it has PyPlot, based on Matplotlib, which allows you to make xkcd-like plots and can make your presentations a lot more fun.

Sequential Bayesian inference for time series

Posted in Statistics by Pierre Jacob on 19 May 2015

Bayes factor between two hidden Markov models, against number of assimilated observations. Values near zero support the simpler model while values larger than one support the more complex model.

Hello hello,

I have just arXived a review article, written for ESAIM: Proceedings and Surveys, called Sequential Bayesian inference for implicit hidden Markov models and current limitations. The topic is sequential Bayesian estimation: you want to perform inference (say, parameter inference, or prediction of future observations), taking into account parameter and model uncertainties, using hidden Markov models. I hope that the article can be useful for some people: I have tried to stay at a general level, but there are more than 90 references if you’re interested in learning more (sorry in advance for not having cited your article on the topic!).  Below I’ll comment on a few points.


Reading Bayesian classics — presentations

Posted in General by Julyan Arbel on 21 April 2015

The students did a great job presenting some Bayesian classics. I enjoyed reading the papers (pdfs can be found here), most of which I hadn’t read before, and also enjoyed the students’ talks. I share here some of the best ones, as well as some illustrative excerpts from the papers. In chronological order (presentations on Slideshare below):

  • W. Keith Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970.

In this paper, we shall consider Markov chain methods of sampling that are generalizations of a method proposed by Metropolis et al. (1953), which has been used extensively for numerical problems in statistical mechanics.

  • Dennis V. Lindley and Adrian F.M. Smith. Bayes estimates for the linear model. Journal of the Royal Statistical Society: Series B (Methodological), 34(1):1–41 (with discussion), 1972.

From Prof. B. de Finetti’s discussion (note the valiant collaborator Smith!):

I think that the main point to stress about this interesting and important paper is its significance for the philosophical questions underlying the acceptance of the Bayesian standpoint as the true foundation for inductive reasoning, and in particular for statistical inference. So far as I can remember, the present paper is the first to emphasize the role of the Bayesian standpoint as a logical framework for the analysis of intricate statistical situation. […] I would like to express my warmest congratulations to my friend Lindley and his valiant collaborator Smith.


