Why shrinking priors shrink?

Posted in General, Statistics by JB Salomond on 19 October 2015


Hello there!

While I was in Amsterdam, I took the opportunity to go and work with the Leiden crowd, and more particularly with Stéphanie van der Pas and Johannes Schmidt-Hieber. Since Stéphanie had already obtained neat results for the horseshoe prior and Johannes some super cool results for the spike-and-slab prior, they were the first choice to team up with to work on sparse models. And guess what? We have just arXived a paper in which we study the sparse Gaussian sequence model

X_i = \theta_i + \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(0,1), \quad i=1,...,n,

where only a small number p_n \ll n of the \theta_i are nonzero.
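
For concreteness, here is a minimal R sketch of what data from this model look like (the sizes n and p_n and the value of the nonzero means are made up for illustration, not taken from the paper):

    # A nearly black mean vector: p_n = 10 nonzero entries out of n = 200,
    # observed with standard Gaussian noise.
    n <- 200; p_n <- 10
    theta <- c(rep(5, p_n), rep(0, n - p_n))   # nonzero means set to 5
    X <- theta + rnorm(n)                      # X_i = theta_i + eps_i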

There is a rapidly growing literature on shrinking priors for such models, just look at Polson and Scott (2012), Caron and Doucet (2008), or Carvalho, Polson, and Scott (2010) among many, many others, or simply have a look at the program of the last BNP conference. There is also a growing literature on the theoretical properties of some of these priors. The horseshoe prior was studied in van der Pas, Kleijn, and van der Vaart (2014), an extension of the horseshoe was then studied in Ghosh and Chakrabarti (2015), and recently the spike-and-slab Lasso was studied in Rocková (2015) (see also Xi'an's Og).

All these results are super nice, but we still want to know: why do some shrinking priors shrink so well while others do not?! As we are all mathematicians here, let me reformulate this last question: what are the conditions on the prior under which the posterior contracts at the minimax rate [1]?

We considered a Gaussian scale mixture prior on the sequence (\theta_i)

\theta_i \sim p(\theta_i) = \int \frac{e^{-\theta_i^2/(2\sigma^2)}}{\sqrt{2\pi \sigma^2}} \pi(\sigma^2) d\sigma^2

since this family of priors encompasses all the ones studied in the papers mentioned above (and more), and thus seemed general enough.
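
For instance, the horseshoe prior is recovered by putting a half-Cauchy prior on the scale. A quick Monte Carlo sketch (my own illustration, not taken from the paper) shows the resulting marginal prior on \theta_i, with its pole at zero and heavy tails:

    # The horseshoe prior as a Gaussian scale mixture: draw a half-Cauchy local
    # scale, then a Gaussian with that scale.
    set.seed(42)
    sigma <- abs(rcauchy(1e5))                # half-Cauchy(0, 1) scales
    theta <- rnorm(1e5, mean = 0, sd = sigma)
    hist(theta[abs(theta) < 5], breaks = 200, freq = FALSE,
         main = "horseshoe prior: spike at zero, heavy tails")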

Our main contribution is to give conditions on \pi such that the posterior contracts at the right rate. We showed that in order to recover the parameters \theta_i that are non-zero, the prior should have tails that decay at most exponentially fast, which is similar to the condition imposed for the spike-and-slab prior. Another expected condition is that the prior should put enough mass around 0, since our assumption is that the parameter vector \theta is nearly black, i.e. most of its components are 0.

More surprisingly, in order to recover the zero parameters correctly, one also needs some conditions on the tails of the prior. More specifically, the prior's tails cannot be too heavy: if they are, we can construct a prior that puts enough mass near 0 but whose posterior does not contract at the minimax rate.

We showed that these conditions are satisfied for many priors, including the horseshoe, the horseshoe+, the normal-gamma and the spike-and-slab Lasso.

Gaussian scale mixtures are also quite simple to use in practice. As explained in Caron and Doucet (2008), a simple Gibbs sampler can be implemented to sample from the posterior. We conducted a simulation study to evaluate the sharpness of our conditions, computing the \ell_2 loss for the Laplace prior, a global-local scale mixture of Gaussians (hereafter called the bad prior for simplicity), the horseshoe and the normal-gamma prior. The first two do not satisfy our conditions, and the last two do. The results are reported in the following picture.


As we can see, priors that do and priors that do not satisfy our conditions show different behaviours (it seems that the priors which do not fit our conditions have an \ell_2 risk larger than the minimax rate by a factor of n). This seems to indicate that our conditions are sharp.
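
As an illustration of how simple such a Gibbs sampler can be, here is a minimal sketch (my own, not the code used for the simulations in the paper) for the horseshoe with local scales only, using the standard inverse-gamma augmentation of the half-Cauchy:

    # Augmentation: if s | nu ~ IG(1/2, 1/nu) and nu ~ IG(1/2, 1), then
    # sqrt(s) ~ half-Cauchy(0, 1), so theta_i | s_i ~ N(0, s_i) is a horseshoe.
    rinvgamma <- function(n, shape, rate) 1 / rgamma(n, shape = shape, rate = rate)

    n <- 200; p_n <- 10
    theta_true <- c(rep(5, p_n), rep(0, n - p_n))
    X <- theta_true + rnorm(n)

    niter <- 2000
    s <- rep(1, n); nu <- rep(1, n)
    draws <- matrix(NA, niter, n)
    for (t in 1:niter) {
      v <- s / (1 + s)                               # conditional variance of theta_i
      theta <- rnorm(n, mean = v * X, sd = sqrt(v))  # theta_i | X_i, s_i
      s  <- rinvgamma(n, 1, 1 / nu + theta^2 / 2)    # s_i | theta_i, nu_i
      nu <- rinvgamma(n, 1, 1 + 1 / s)               # nu_i | s_i
      draws[t, ] <- theta
    }
    post_mean <- colMeans(draws[-(1:500), ])         # drop burn-in
    sum((post_mean - theta_true)^2)                  # squared l2 loss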

At the end of the day, our results expand the class of shrinkage priors with theoretical guarantees for the posterior contraction rate. Not only can they be used to obtain the optimal posterior contraction rate for the horseshoe+, the inverse-Gaussian and normal-gamma priors, but the conditions also provide some characterization of the properties of sparsity priors that lead to desirable behaviour. Essentially, the tails of the prior on the local variance should be at least as heavy as Laplace, but not too heavy, and there needs to be a sizable amount of mass around zero compared to the amount of mass in the tails, in particular when the underlying mean vector becomes sparser.


Caron, François, and Arnaud Doucet. 2008. “Sparse Bayesian Nonparametric Regression.” In Proceedings of the 25th International Conference on Machine Learning, 88–95. ICML ’08. New York, NY, USA: ACM.

Carvalho, Carlos M., Nicholas G. Polson, and James G. Scott. 2010. “The Horseshoe Estimator for Sparse Signals.” Biometrika 97 (2): 465–80.

Ghosh, Prasenjit, and Arijit Chakrabarti. 2015. “Posterior Concentration Properties of a General Class of Shrinkage Estimators Around Nearly Black Vectors.”

Pas, S.L. van der, B.J.K. Kleijn, and A.W. van der Vaart. 2014. “The Horseshoe Estimator: Posterior Concentration Around Nearly Black Vectors.” Electron. J. Stat. 8: 2585–2618.

Polson, Nicholas G., and James G. Scott. 2012. “Good, Great or Lucky? Screening for Firms with Sustained Superior Performance Using Heavy-Tailed Priors.” Ann. Appl. Stat. 6 (1): 161–85.

Rocková, Veronika. 2015. “Bayesian Estimation of Sparse Signals with a Continuous Spike-and-Slab Prior.”

  1. For those wondering why the heck the minimax rate shows up here, just remember that a posterior which contracts at the minimax rate induces an estimator that converges at the same rate. It also ensures that confidence regions will not be too large.

Sequential Bayesian inference for time series

Posted in Statistics by Pierre Jacob on 19 May 2015

Bayes factor between two hidden Markov models, against number of assimilated observations. Values near zero support the simpler model while values larger than one support the more complex model.

Hello hello,

I have just arXived a review article, written for ESAIM: Proceedings and Surveys, called Sequential Bayesian inference for implicit hidden Markov models and current limitations. The topic is sequential Bayesian estimation: you want to perform inference (say, parameter inference, or prediction of future observations), taking into account parameter and model uncertainties, using hidden Markov models. I hope that the article can be useful to some people: I have tried to stay at a general level, but there are more than 90 references if you're interested in learning more (sorry in advance for not having cited your article on the topic!). Below I'll comment on a few points.


Statistics journals network

Posted in General, R, Statistics by Julyan Arbel on 16 April 2015
Statistical journals friendship (click for SVG format)

Xian blogged recently on the upcoming RSS read paper: Statistical Modelling of Citation Exchange Between Statistics Journals, by Cristiano Varin, Manuela Cattelan and David Firth. Following the last JRSS B read paper by one of us! The data used in the paper (and which can be downloaded here) are quite fascinating for us academics, fascinated as we are by academic rankings, for better or for worse (irony intended). They consist of cross-citation counts C = (C_{ij}) for 47 statistics journals (see the list and abbreviations on page 5): C_{ij} is the number of citations from articles published in journal j in 2010 to papers published in journal i in the 2001-2010 decade. The choice of the list of journals is discussed in the paper. Major journals missing include Bayesian Analysis (published since 2006) and The Annals of Applied Statistics (published since 2007).

I looked at the ratio of total citations received to total citations made. This is a super simple descriptive statistic which happens to look rather similar to Figure 4, which plots Export Scores from the Stigler model (I can't say more about it, as I haven't read that part in detail). The top five is the same, modulo the swap between the Annals of Statistics and Biometrika. Of course, a big difference is that the Cited/Citing ratio isn't endowed with a measure of uncertainty (below, left is my making, right is Fig. 4 in the paper).


I was surprised not to see a graph/network representation of the data in the paper. As it happens, I wanted to try the gephi software for drawing graphs, used for instance by François Caron and Emily Fox in their sparse graphs paper. I got the above graph, where:

  • for the data, I used the citations matrix C renormalized by the total number of citations made, which I denote by \tilde C. This is a way to account for the size (number of papers published) of each journal. It is just a proxy, though, since the actual number of papers published by each journal is not available in the data. Without that correction, CSDA is way ahead of all the others.
  • the node size represents the Cited/Citing ratio
  • the edge width represents the renormalized \tilde C_{ij}. I'm unsure what gephi does here, since it converts my directed graph into an undirected one. I suppose that it displays only the larger of the two edges \tilde C_{ij} and \tilde C_{ji}.
  • for better visibility I kept only the heaviest decile of edges.
  • the clusters identified by four colors are modularity classes obtained by the Louvain method (a small R sketch of this construction follows the list).
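
For those who prefer to stay in R, here is a minimal sketch of the same construction with the igraph package. This is my own reconstruction of the pipeline (not the gephi workflow), and it assumes the 47 x 47 count matrix C is already loaded, with journal names as dimnames:

    # C[i, j] = citations from journal j (2010) to journal i (2001-2010).
    library(igraph)
    Ctilde <- sweep(C, 2, colSums(C), "/")        # normalise by citations made
    ratio  <- rowSums(C) / colSums(C)             # Cited / Citing ratio
    g <- graph_from_adjacency_matrix(Ctilde, mode = "directed", weighted = TRUE)
    V(g)$size <- 10 * ratio                       # node size ~ Cited/Citing ratio
    heavy <- E(g)$weight >= quantile(E(g)$weight, 0.9)   # heaviest decile of edges
    g <- subgraph.edges(g, E(g)[heavy], delete.vertices = FALSE)
    comm <- cluster_louvain(as.undirected(g, mode = "collapse"))
    plot(g, vertex.color = membership(comm), edge.width = 5 * E(g)$weight)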

Some remarks

The two software journals included in the dataset are clear outliers:

  • the Journal of Statistical Software (JSS) is disconnected from the others, meaning that none of its normalized citations \tilde C_{ij} fall in the heaviest decile, except for its self-citations, which are quite large and give it the 4th highest Impact Factor in the whole list in 2010 (and apparently the first in 2015).
  • the largest \tilde C_{ij} of all is the self-citation weight of the Stata Journal (StataJ).


  • CSDA is the most central journal in the sense of the highest (unweighted) degree.

Some further thoughts

All that is just for the fun of it. As mentioned by the authors, citation counts are heavy-tailed, meaning that just a few papers account for much of the citations of a journal while most of the papers account for few citations. As a matter of fact, the total number of citations received is mostly driven by a few super-cited papers, and so is the normalized citation matrix \tilde C that I use throughout for building the graph. A reason one could put forward for why JRSS B does so well is the read papers: for instance, the DIC paper of Spiegelhalter et al. (2002) alone received 11.9% of all JRSS B citations in 2010. Who'd bet on the number of citations this new read paper (JRSS A, though) will receive?

Bayesian classics

Posted in Statistics by Julyan Arbel on 17 March 2015

Collegio Carlo Alberto on a sunny day

This week I'll start my Bayesian Statistics master's course at the Collegio Carlo Alberto. I realized that some of last year's students got PhD positions at prestigious US universities. So I thought that letting this year's students get a first grasp of some great Bayesian papers wouldn't do any harm. The idea is that, in addition to the course, the students will pick a paper from a list and present it (or rather part of it) to the others and to me, which will let them earn some extra points towards the final exam mark. It's in the spirit of Xian's Reading Classics Seminar (his list here).

I've made up the list below, inspired by the reference lists of two textbooks and biased by personal taste: Xian's Bayesian Choice and Peter Hoff's A First Course in Bayesian Statistical Methods. See the pdf list and zipped folder for the papers. Comments on the list are most welcome!


PS: reference n°1 isn’t a joke!


[Meta-]Blogging as young researchers

Posted in General, Statistics by Pierre Jacob on 11 December 2014

Summary of the article.

Hello all,

This is an article intended for the ISBA bulletin, jointly written by us all at Statisfaction, Rasmus Bååth from Publishable Stuff, Boris Hejblum from Research side effects, Thiago G. Martins from tgmstat@wordpress, Ewan Cameron from Another Astrostatistics Blog and Gregory Gandenberger from

Inspired by established blogs, such as the popular Statistical Modeling, Causal Inference, and Social Science or Xi’an’s Og, each of us began blogging as a way to diarize our learning adventures, to share bits of R code or LaTeX tips, and to advertise our own papers and projects. Along the way we’ve come to a new appreciation of the world of academic blogging: a never-ending international seminar, attended by renowned scientists and anonymous users alike. Here we share our experiences by weighing the pros and cons of blogging from the point of view of young researchers.


momentify R package at BAYSM14

Posted in General, R, Seminar/Conference, Statistics by Julyan Arbel on 20 September 2014

I presented an arXived paper from my postdoc at the very successful Young Bayesian Conference (BAYSM 2014) in Vienna. The big picture of the talk is simple: there are situations in Bayesian nonparametrics where you don't know how to sample from the posterior distribution, but can only compute posterior expectations (so-called marginal methods), so that, for instance, you cannot provide credible intervals. But sometimes all the moments of the posterior distribution are available as posterior expectations. So, morally, you should be able to say more about the posterior than just reporting its mean. To be more specific, we consider a mixture model for the hazard rate h

\displaystyle h(t)=\int k(t;y)\mu(dy)

where k is a kernel, and the mixing distribution \mu is random and discrete (Bayesian nonparametric approach).

We consider the survival function S which is recovered from the hazard rate h by the transform

\displaystyle S(t)=\exp\Big(-\int_0^t h(s)ds\Big)

and some possibly censored survival data with survival function S. It then turns out that all the posterior moments of the survival curve S(t), evaluated at any time t, can be computed.

The nice trick of the paper is to use the representation of a distribution in a [Jacobi polynomial] basis where the coefficients are linear combinations of the moments. So one can sample from [an approximation of] the posterior, and with a posterior sample we can do everything! Including credible intervals.
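
To fix ideas, here is a minimal sketch of the moment-to-density step (my own illustration using shifted Legendre polynomials, i.e. Jacobi polynomials with \alpha = \beta = 0; this is not the momentify code):

    # Approximate a density on [0, 1] from its first K raw moments.
    approx_density <- function(mom) {
      K <- length(mom)
      m <- c(1, mom)                                   # prepend m_0 = 1
      # coefficient of x^j in the shifted Legendre polynomial of degree k
      lc <- function(k, j) (-1)^(k + j) * choose(k, j) * choose(k + j, j)
      # basis coefficients c_k = (2k + 1) E[P_k(X)]: linear in the moments
      ck <- sapply(0:K, function(k)
        (2 * k + 1) * sum(sapply(0:k, function(j) lc(k, j) * m[j + 1])))
      Vectorize(function(x)
        sum(sapply(0:K, function(k)
          ck[k + 1] * sum(sapply(0:k, function(j) lc(k, j) * x^j)))))
    }
    # sanity check: recover a Beta(2, 5) density from its first 6 moments
    mom <- sapply(1:6, function(k) prod((2 + 0:(k - 1)) / (7 + 0:(k - 1))))
    f_hat <- approx_density(mom)
    curve(f_hat(x), 0, 1); curve(dbeta(x, 2, 5), add = TRUE, lty = 2)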

I’ve wrapped up the few lines of code in an R package called momentify (not on CRAN). With a sequence of moments of a random variable supported on [0,1] as an input, the package does two things:

  • evaluates the approximate density
  • samples from it

A package example for a mixture of betas, using 2 to 7 moments, gives the following result:


Non-negative unbiased estimators

Posted in Statistics by Pierre Jacob on 13 May 2014
Benedict Cumberbatch having to choose between unbiasedness and non-negativity.

Hey hey,

With Alexandre Thiéry, we've been working on non-negative unbiased estimators for a while now. Since I've been talking about it at conferences and since we've just arXived the second version of the article, it's time for a blog post. This post is kind of a follow-up to a previous post from July, where I was commenting on Playing Russian Roulette with Intractable Likelihoods by Mark Girolami, Anne-Marie Lyne, Heiko Strathmann, Daniel Simpson and Yves Atchadé.


Parallel resampling in the particle filter

Posted in Statistics by Pierre Jacob on 12 May 2014

Decisions decisions… which resampling to use on the CPU (left), GPU (middle), or between both (right). The y-axis essentially parametrizes the expected variance of the weights (high values = “we observe an outlier” = high variance of the weights).

Hey there,

It's been a while since I've written about parallelization and GPUs. With colleagues Lawrence Murray and Anthony Lee, we have just arXived a new version of Parallel resampling in the particle filter. The setting is that, on modern computing architectures such as GPUs, thousands of operations can be performed in parallel (i.e. simultaneously), and therefore the part of the calculation that cannot be parallelized quickly becomes the bottleneck. In the case of the particle filter (or any sequential Monte Carlo method, such as SMC samplers), that bottleneck is the resampling step. The article investigates this issue and numerically compares different resampling schemes.
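
For readers unfamiliar with the step in question, here is a minimal R sketch of one standard scheme, systematic resampling, which maps normalized weights to ancestor indices (my own illustration, unrelated to the GPU implementations studied in the paper):

    # Systematic resampling: a single uniform, n evenly spaced points, and an
    # inversion of the cumulative weights. Returns n ancestor indices.
    systematic_resampling <- function(weights) {
      n <- length(weights)
      u <- (runif(1) + 0:(n - 1)) / n      # evenly spaced points in [0, 1)
      cumw <- cumsum(weights)
      cumw[n] <- 1                         # guard against rounding error
      findInterval(u, cumw) + 1            # smallest j such that cumw[j] >= u
    }
    # example: resample 10 particles according to their normalized weights
    w <- runif(10); w <- w / sum(w)
    ancestors <- systematic_resampling(w)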


Rasmus Bååth’s Bayesian first aid

Posted in Project, R, Statistics by Pierre Jacob on 23 January 2014

Besides having coded a pretty cool MCMC app in JavaScript, this guy Rasmus Bååth has started the Bayesian first aid project. The idea is that if there's an R function called blabla.test performing test "blabla", there should be a function bayes.blabla.test performing a similar test in a Bayesian framework, and showing the output in a similar way so that the user can easily compare both approaches. This post explains it all. JAGS and BEST seem to be the two main workhorses under the hood.
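
To give an idea of the intended workflow, a call would look something like the lines below. This is a hypothetical illustration on my part: the package lives on GitHub rather than CRAN, and the function name simply follows the bayes.blabla.test convention described above.

    # classical test
    binom.test(x = 16, n = 20)
    # Bayesian counterpart with the same interface (assumed name, following the
    # bayes.blabla.test convention)
    # devtools::install_github("rasmusab/bayesian_first_aid")
    library(BayesianFirstAid)
    bayes.binom.test(x = 16, n = 20)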

Kudos to Rasmus for this very practical approach, potentially very impactful. Maybe someday people will have to specify if they want a frequentist approach and not the other way around! (I had a dream, etc).

Pseudo-Bayes: a quick and awfully incomplete review

Posted in General, Statistics by nicolaschopin on 18 September 2013

You called me a pseudo-Bayesian. Prepare to die.

A recently arXived paper by Pier Bissiri, Chris Holmes and Steve Walker piqued my curiosity about "pseudo-Bayesian" approaches, that is, statistical approaches based on a pseudo-posterior:

\pi(\theta) \propto p(\theta) \hat{L}(\theta)

where \hat{L}(\theta) is some pseudo-likelihood. Pier, Chris and Steve use in particular

\hat{L}(\theta) = \exp\{ - \lambda R_n(\theta,x) \}

where R_n(\theta,x) is some empirical risk function. A good example is classification; then R_n(\theta,x) could be the proportion of misclassified points:

R_n(\theta,x) = \frac{1}{n}\sum_{i=1}^n \mathbf{I}(y_i f_{\theta}(x_i) < 0)

where f_{\theta} is some score function parametrised by \theta, and y_i\in\{-1,1\}. (Side note: I find the -1/1 ML convention for the y_i more convenient than the 0/1 stats convention.)

It turns out that this particular kind of pseudo-posterior has already been encountered before, but with different motivations:

  • Chernozhukov and Hong (JoE, 2003) used it to define new frequentist estimators based on moment estimation ideas (i.e. take R_n above to be some empirical moment constraint). The focus is on establishing frequentist properties of, say, the expectation of the pseudo-posterior. (It seems to me that few people in Stats have heard of this paper.)
  • the PAC-Bayesian approach, which originates from machine learning, also relies on this kind of pseudo-posterior. To be more precise, PAC-Bayes usually starts by minimising the upper bound of an oracle inequality within a class of randomised estimators; as a result, a possible solution is, say, a single draw from the pseudo-posterior defined above. A good introduction is this book by Olivier Catoni.
  • Finally, Pier, Chris and Steve's approach is by far the most Bayesian of these three pseudo-Bayesian approaches, in the sense that they try to maintain an interpretation of the pseudo-posterior as a representation of the uncertainty on \theta. Crudely speaking, they don't look only at the expectation, like the two approaches above, but also at the spread of the pseudo-posterior.

Let me mention briefly that quite a few papers have considered using other types of pseudo-likelihood in a pseudo-posterior, such as empirical likelihood, composite likelihood, and so on, but I will shamefully skip them for now.

To what extent should this growing interest in "pseudo-Bayes" have an impact on Bayesian computation? For one thing, more problems to throw at our favourite algorithms should be good news. In particular, Chernozhukov and Hong mention the possibility of using MCMC as a big advantage of their approach, because the criterion function L_n they consider is typically difficult to minimise directly with optimisation algorithms. PAC-Bayesians also seem to recommend MCMC, but I could not find many PAC-Bayesian papers that go beyond the theory and actually implement it; an exception is this.

On the other hand, these pseudo-posteriors might be quite nasty. First, given the way they are defined, they should not have the kind of structure that makes Gibbs sampling possible. Second, many interesting choices for R_n seem to lead to irregular or multimodal targets; in the classification example, for instance, the 0-1 loss function is not even continuous. Hopefully the coming years will witness some interesting research on which computational approaches are best suited for pseudo-Bayes computation, but readers will not be surprised if I put my euros on (some form of) SMC!
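
To make this concrete, here is a minimal sketch (my own toy example, not taken from any of the papers above) of a random-walk Metropolis sampler targeting the pseudo-posterior for the classification example, with a linear score and the 0-1 empirical risk; no gradients or conjugacy are needed, which is precisely why MCMC is the natural tool here:

    # Toy data: labels in {-1, 1}, linear score f_theta(x) = theta[1] + theta[2] * x
    set.seed(1)
    n <- 200
    x <- rnorm(n)
    y <- ifelse(x + rnorm(n, sd = 0.5) > 0, 1, -1)

    risk <- function(theta) mean(y * (theta[1] + theta[2] * x) < 0)   # 0-1 risk R_n
    log_pseudo_post <- function(theta, lambda = 20)
      sum(dnorm(theta, 0, 10, log = TRUE)) - lambda * risk(theta)     # log p(theta) - lambda R_n

    niter <- 5000
    chain <- matrix(NA, niter, 2)
    theta <- c(0, 0)
    for (t in 1:niter) {
      prop <- theta + rnorm(2, sd = 0.5)                              # random-walk proposal
      if (log(runif(1)) < log_pseudo_post(prop) - log_pseudo_post(theta)) theta <- prop
      chain[t, ] <- theta
    }
    colMeans(chain)   # pseudo-posterior mean; the spread of the chain quantifies "uncertainty"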

