Hello there!
While I was in Amsterdam, I took the opportunity to go and work with the Leiden crowd, and more particularly with Stéphanie van der Pas and Johannes Schmidt-Hieber. Since Stéphanie had already obtained neat results for the Horseshoe prior and Johannes had obtained some super cool results for the spike-and-slab prior, they were the first choice to team up with to work on sparse models. And guess what? We have just arXived a paper in which we study the sparse Gaussian sequence model X_i = θ_i + ε_i, i = 1, …, n, with ε_i i.i.d. N(0, 1),
where only a small number of the θ_i are non-zero.
There is a rapidly growing literature on shrinkage priors for such models: just look at Polson and Scott (2012), Caron and Doucet (2008), Carvalho, Polson, and Scott (2010) among many, many others, or simply have a look at the program of the last BNP conference. There is also an ever-growing literature on the theoretical properties of some of these priors. The Horseshoe prior was studied in Pas, Kleijn, and Vaart (2014), an extension of the Horseshoe was then studied in Ghosh and Chakrabarti (2015), and recently the spike-and-slab Lasso was studied in Rocková (2015) (see also Xian's 'Og).
All these results are super nice, but still we want to know: why do some shrinkage priors shrink so well while others do not?! As we are all mathematicians here, let me reformulate this last question: what are the conditions on the prior under which the posterior contracts at the minimax rate¹?
We considered a Gaussian scale-mixture prior on the sequence, θ_i | σ_i² ~ N(0, σ_i²) with the σ_i² drawn i.i.d. from a mixing density π,
since this family of priors encompasses all the ones studied in the papers mentioned above (and more), so it seemed general enough.
Our main contribution is to give conditions on the prior such that the posterior contracts at the good rate. We showed that in order to recover the parameters that are non-zero, the prior should have tails that decay at most exponentially fast, which is similar to the condition imposed for the spike-and-slab prior. Another expected condition is that the prior should put enough mass around 0, since our assumption is that the parameter vector is nearly black, i.e. most of its components are 0.
More surprisingly, in order to recover the zero parameters correctly, one also needs some conditions on the tails of the prior. More specifically, the prior's tails cannot be too heavy: if they are, we can construct a prior that puts enough mass near 0 but whose posterior does not contract at the minimax rate.
We showed that these conditions are satisfied for many priors including the Horseshoe, the Horseshoe+, the Normal-Gamma and the Spike and Slab Lasso.
Gaussian scale mixtures are also quite simple to use in practice. As explained in Caron and Doucet (2008), a simple Gibbs sampler can be implemented to sample from the posterior. We conducted a simulation study to evaluate the sharpness of our conditions. We computed the loss for the Laplace prior, a global-local scale mixture of Gaussians (hereafter called the bad prior for simplicity), the Horseshoe and the Normal-Gamma prior. The first two do not satisfy our conditions, and the last two do. The results are reported in the following picture.
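For what it's worth, here is a toy Python sketch of such a Gibbs sampler for one convenient member of the scale-mixture family, an inverse-gamma mixing density (so the marginal prior on each θ_i is Student-t). It illustrates the general mechanism only; it is not one of the samplers used in the paper, and the function name and hyperparameters are mine:

```python
import random

def gibbs_scale_mixture(x, a=2.0, b=1.0, iters=2000, burnin=500, seed=1):
    """Toy Gibbs sampler for the sequence model x_i = theta_i + eps_i,
    eps_i ~ N(0, 1), under the Gaussian scale-mixture prior
    theta_i | s_i ~ N(0, s_i), s_i ~ InvGamma(a, b).
    Both full conditionals are conjugate:
      theta_i | s_i, x_i ~ N(x_i * s_i / (1 + s_i), s_i / (1 + s_i))
      s_i | theta_i      ~ InvGamma(a + 1/2, b + theta_i^2 / 2)
    Returns the posterior mean of each theta_i."""
    rng = random.Random(seed)
    n = len(x)
    s = [1.0] * n          # current local variances
    sums = [0.0] * n       # running sums of post-burn-in theta draws
    kept = 0
    for t in range(iters):
        for i in range(n):
            var = s[i] / (1.0 + s[i])
            theta = rng.gauss(x[i] * var, var ** 0.5)
            # InvGamma(shape, rate) draw via 1 / Gamma(shape, scale=1/rate)
            s[i] = 1.0 / rng.gammavariate(a + 0.5,
                                          1.0 / (b + 0.5 * theta * theta))
            if t >= burnin:
                sums[i] += theta
        if t >= burnin:
            kept += 1
    return [v / kept for v in sums]
```

For instance, `gibbs_scale_mixture([0.1, -0.2, 10.0])` shrinks the two small observations toward zero while leaving the large one almost untouched, which is exactly the behaviour the conditions above are about.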
As we can see, priors that do and priors that do not satisfy our conditions show different behaviour (it seems that the risk of the priors that do not fit our conditions exceeds the minimax rate by a multiplicative factor). This seems to indicate that our conditions are sharp.
At the end of the day, our results expand the class of shrinkage priors with theoretical guarantees for the posterior contraction rate. Not only can they be used to obtain the optimal posterior contraction rate for the horseshoe+, the inverse-Gaussian and normal-gamma priors, but the conditions also provide some characterization of the properties of sparsity priors that lead to desirable behaviour. Essentially, the tails of the prior on the local variance should be at least as heavy as Laplace, but not too heavy, and there needs to be a sizable amount of mass around zero compared to the amount of mass in the tails, in particular when the underlying mean vector grows more sparse.
Caron, François, and Arnaud Doucet. 2008. “Sparse Bayesian Nonparametric Regression.” In Proceedings of the 25th International Conference on Machine Learning, 88–95. ICML ’08. New York, NY, USA: ACM.
Carvalho, Carlos M., Nicholas G. Polson, and James G. Scott. 2010. “The Horseshoe Estimator for Sparse Signals.” Biometrika 97 (2): 465–80.
Ghosh, Prasenjit, and Arijit Chakrabarti. 2015. “Posterior Concentration Properties of a General Class of Shrinkage Estimators Around Nearly Black Vectors.”
Pas, S.L. van der, B.J.K. Kleijn, and A.W. van der Vaart. 2014. “The Horseshoe Estimator: Posterior Concentration Around Nearly Black Vectors.” Electron. J. Stat. 8: 2585–2618.
Polson, Nicholas G., and James G. Scott. 2012. “Good, Great or Lucky? Screening for Firms with Sustained Superior Performance Using Heavy-Tailed Priors.” Ann. Appl. Stat. 6 (1): 161–85.
Rocková, Veronika. 2015. “Bayesian Estimation of Sparse Signals with a Continuous Spike-and-Slab Prior.”
- For those wondering why the heck we care about the minimax rate here, just remember that a posterior that contracts at the minimax rate induces an estimator which converges at the same rate. It also gives us that confidence regions will not be too large.↩
While everyone was away in July, James Ridgway and I posted our “Leave (the) Pima Indians alone” paper on arXiv, in which we discuss to what extent probit/logit regression and not-too-big datasets (such as the now famous Pima Indians dataset) constitute a relevant benchmark for Bayesian computation.
The actual title of the paper is “Leave Pima Indians alone…”, but Xian changed it to “Leave *the* Pima Indians alone…” when discussing it on his blog. Any opinion on whether it sounds better with “the”?
On a different note, one of our findings is that Expectation-Propagation works wonderfully for such models; yes it is an approximate method, but it is very fast, and the approximation error is consistently negligible on all the datasets we looked at.
James has just posted on CRAN the EPGLM package, which computes an EP approximation of the posterior of a logit or probit model. The documentation is a bit terse at the moment, but it is very straightforward to use.
Comments on the package, the paper, its grammar or Pima Indians are most welcome!
This very fine title quotes a pretty hilarious banquet speech by David Dunson at the last BNP conference held in Raleigh last June. The graph is by François Caron who used it in his talk there. See below for his explanation.
After the summer break, back to work. The academic year to come looks promising from a BNP point of view. Not least, three special issues have been announced: in Statistics & Computing (guest editors: Tamara Broderick (MIT), Katherine Heller (Duke), Peter Mueller (UT Austin)), in the Electronic Journal of Statistics (guest editor: Subhashis Ghoshal (NCSU)), and in the International Journal of Approximate Reasoning (proposal deadline December 1st; guest editors: Alessio Benavoli (Lugano), Antonio Lijoi (Pavia) and Antonietta Mira (Lugano)).
BNP is also going to infiltrate MCMSki V, Lenzerheide, Switzerland, January 4-7, 2016, with three sessions with a BNP flavor, in addition to plenary speakers David Dunson and Michael Jordan. The International Society for Bayesian Analysis World Meeting, 13-17 June 2016, should also host plenty of BNP sessions, as well as a De Finetti Lecture by Persi Diaconis (Stanford University). (more…)
With colleagues Stefano Favaro and Bernardo Nipoti from Turin and Yee Whye Teh from Oxford, we have just arXived an article on discovery probabilities. If you are looking for some info on a space shuttle, a cycling team or a TV channel, it's the wrong place. Instead, discovery probabilities are central to ecology, biology and genomics, where the data can be seen as a population of individuals belonging to an (ideally) infinite number of species. Given a sample of size n, the l-discovery probability is the probability that the next individual observed matches a species with frequency l in the n-sample. For instance, the probability of observing a new species (the 0-discovery probability) is key for devising sampling experiments.
By the way, why Alan Turing? Because, with his fellow researcher at Bletchley Park Irving John Good (who starred in The Imitation Game too), Turing is also known for the so-called Good-Turing estimator of the discovery probability, D_n(l) = (l+1) m_{l+1,n} / n,
which involves m_{l+1,n}, the number of species with frequency l+1 in the sample (i.e. the frequency of frequencies, if you follow me). As it happens, this estimator, defined in Good's 1953 Biometrika paper, has been wildly popular among the ecology-biology-genomics communities ever since, at least in the small circles where wild popularity and probability aren't mutually exclusive.
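The Good-Turing estimator takes only a few lines to compute; here is a toy sketch (the function name is mine):

```python
from collections import Counter

def good_turing(sample, l):
    """Good-Turing estimate of the l-discovery probability: the chance that
    the next observation belongs to a species seen exactly l times,
    D_n(l) = (l + 1) * m_{l+1} / n, where m_j is the number of species
    observed exactly j times among the n observations.
    l = 0 gives the probability of discovering a new species."""
    n = len(sample)
    species_counts = Counter(sample)      # frequency of each species
    m = Counter(species_counts.values())  # frequency of frequencies
    return (l + 1) * m[l + 1] / n
```

For instance, with the sample a, a, b, c, c, c, d (n = 7), two species have frequency 1, so the estimated probability of observing a new species is 1 × 2/7 = 2/7.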
Simple explicit estimators of discovery probabilities in the Bayesian nonparametric (BNP) framework of Gibbs-type priors were given by Lijoi, Mena and Prünster in a 2007 Biometrika paper. The main difference between the two estimators of D_n(l) is that Good-Turing involves m_{l+1,n} and n only, while the BNP estimator involves l and m_{l,n} (instead of m_{l+1,n}), as well as K_n, the total number of observed species. It has been shown in the literature that the BNP estimators are more reliable than the Good-Turing estimators.
How do we contribute? (i) We describe the posterior distribution of the discovery probabilities in the BNP model, which is pretty useful for deriving exact credible intervals for the estimates, and (ii) we investigate the large-n asymptotic behavior of the estimators.
Hi there!
Unfortunately this post is indeed about statistics…
If you are randomly walking around the statistics blogs, you have probably heard of this new language called Julia. The developers claim it is as easy to write as R and as fast as C (!), which is quite a catchy way of selling their work. After talking with an enthusiastic Julia user in Amsterdam, I decided to give it a try, and here I am sharing my first impressions.
First things first: the installation is as easy as for any other language, plus there is a neat package manager that lets you get started quite easily. In this respect it is very similar to R.
On the minus side: I have become a big fan of RStudio, which Julian (… oupsy, Julyan) told you about a long time ago. These kinds of programs really make your life easier. I thus tried Juno, which turned out to be cumbersome and terribly slow. I would have loved an IDE for Julia that is up to the RStudio standard. Never mind.
Now let's talk a little about what is really interesting: is their catch phrase false advertising or not?!
There is a bunch of relatively good tutorials online which are really helpful for learning the basic vocabulary, but indeed, if like me you are used to coding in R and/or Python, you should get it pretty fast: you can almost copy-paste your favourite code into Julia and, with a few adjustments, it will work. So, as easy to write as R? Quite so.
I then tried to compare computation times for some of my latest codes, and there came the good surprise! A code that took a handful of minutes to run in R, mainly due to unavoidable loops, took a couple of seconds in Julia, without any other sort of optimization. The handling of big objects is smooth, and I did not run into the memory problems that R was suffering from.
So far so good! But of course there have to be some drawbacks. The first one is the poor package repository compared to CRAN, or even to what you can get for Python. This might of course improve in the next few years, as the language is still quite new; however, it is bothersome to have to re-code something when you are used to simply loading a package in R. Another, probably less important, problem is the lack of data-visualization methods, and especially the absence of ggplot2, which we have grown quite fond of around here. There is of course Gadfly, which is quite close, but once again it is so far very limited compared to what I was used to…
All in all, I am happy to have tried Julia, and I am quite sure that I will be using it a lot from now on. However, even if it is great from an efficiency point of view, and way easier to learn than C (which I should have done a while ago), R and its tremendous package repository are far from beaten.
Oh, and by the way, it has PyPlot, based on matplotlib, which allows you to make some xkcd-like plots that can make your presentations a lot more fun.
I have just arXived a review article, written for ESAIM: Proceedings and Surveys, called Sequential Bayesian inference for implicit hidden Markov models and current limitations. The topic is sequential Bayesian estimation: you want to perform inference (say, parameter inference, or prediction of future observations), taking into account parameter and model uncertainties, using hidden Markov models. I hope that the article can be useful for some people: I have tried to stay at a general level, but there are more than 90 references if you’re interested in learning more (sorry in advance for not having cited your article on the topic!). Below I’ll comment on a few points.
The students did a great job in presenting some Bayesian classics. I enjoyed reading the papers (pdfs can be found here), most of which I hadn’t read before, and enjoyed also the students’ talks. I share here some of the best ones, as well as some demonstrative excerpts from the papers. In chronological order (presentations on slideshare below):
- W. Keith Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970.
In this paper, we shall consider Markov chain methods of sampling that are generalizations of a method proposed by Metropolis et al. (1953), which has been used extensively for numerical problems in statistical mechanics.
- Dennis V. Lindley and Adrian F.M. Smith. Bayes estimates for the linear model. Journal of the Royal Statistical Society: Series B (Statistical Methodology), with discussion, 1–41, 1972.
From Prof. B. de Finetti's discussion (note the valiant collaborator Smith!):
I think that the main point to stress about this interesting and important paper is its significance for the philosophical questions underlying the acceptance of the Bayesian standpoint as the true foundation for inductive reasoning, and in particular for statistical inference. So far as I can remember, the present paper is the first to emphasize the role of the Bayesian standpoint as a logical framework for the analysis of intricate statistical situations. […] I would like to express my warmest congratulations to my friend Lindley and his valiant collaborator Smith.
Xian blogged recently on the incoming RSS read paper: Statistical Modelling of Citation Exchange Between Statistics Journals, by Cristiano Varin, Manuela Cattelan and David Firth, following the last JRSS B read paper by one of us! The data used in the paper (which can be downloaded here) are quite fascinating for us academics, fascinated as we are by academic rankings, for better or for worse (ironic here). They consist of cross-citation counts for 47 statistics journals (see the list and abbreviations on page 5): the entry C_{ij} is the number of citations from articles published in journal i in 2010 to papers published in journal j in the 2001-2010 decade. The choice of the list of journals is discussed in the paper. Major missing journals include Bayesian Analysis (published from 2006) and The Annals of Applied Statistics (published from 2007).
I looked at the ratio of total citations received to total citations made. This is a super simple descriptive statistic which happens to look rather similar to Figure 4, which plots export scores from the Stigler model (I can't say more about it, as I haven't read that part in detail). The top five is the same, modulo the swap between the Annals of Statistics and Biometrika. Of course, a big difference is that the Cited/Citing ratio isn't endowed with a measure of uncertainty (below, left is my making, right is Fig. 4 in the paper).
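This ratio is trivial to compute from a cross-citation matrix. Here is a minimal Python sketch with a made-up 3×3 matrix and journal names (the real data have 47 journals):

```python
def cited_citing_ratio(C, journals):
    """Cited/Citing ratio from a cross-citation matrix C, where C[i][j] is
    the number of citations from journal i to journal j. A journal's ratio
    is total citations received (its column sum) over total citations made
    (its row sum). Returns the ratios sorted in decreasing order."""
    n = len(journals)
    received = [sum(C[i][j] for i in range(n)) for j in range(n)]
    made = [sum(C[i][j] for j in range(n)) for i in range(n)]
    ratios = {journals[k]: received[k] / made[k] for k in range(n)}
    return dict(sorted(ratios.items(), key=lambda kv: -kv[1]))
```

A journal with a ratio above 1 is a net exporter of citations; the ranking this induces is what turns out to resemble the export scores of Figure 4.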
I was surprised not to see a graph / network representation of the data in the paper. As it happens I wanted to try the gephi software for drawing graphs, used for instance by François Caron and Emily Fox in their sparse graphs paper. I got the above graph, where:
- for the data, I used the citation matrix renormalized by the total number of citations made, C̃_{ij} = C_{ij} / Σ_k C_{ik}. This is a way to account for the size (number of papers published) of each journal, though it is just a proxy, since the actual number of papers published by each journal is not available in the data. Without that correction, CSDA is way ahead of all the others.
- the node size represents the Cited/Citing ratio
- the edge width represents the renormalized citations C̃_{ij}. I'm unsure of what gephi does here, since it converts my directed graph into an undirected one; I suppose that it displays only the larger of the two edges C̃_{ij} and C̃_{ji}.
- for better visibility, I kept only the first decile of heaviest edges.
- the clusters identified by four colors are modularity classes obtained by the Louvain method.
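The edge-list preparation described above can be sketched as follows (my own reconstruction with made-up journal names, not the actual pipeline):

```python
def top_decile_edges(C, journals):
    """Build the directed edge list for the citation graph: normalize each
    row of the citation matrix by the journal's total citations made, then
    keep only the first decile of heaviest edges (for plotting, e.g. with
    gephi). Self-citations are kept, as in the post."""
    n = len(journals)
    edges = []
    for i in range(n):
        made = sum(C[i])
        for j in range(n):
            if C[i][j] > 0:
                edges.append((journals[i], journals[j], C[i][j] / made))
    edges.sort(key=lambda e: -e[2])       # heaviest edges first
    return edges[: max(1, len(edges) // 10)]
```

The threshold of one decile is purely cosmetic; lowering it reconnects journals such as JSS that only survive through self-citations.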
The two software journals included in the dataset are clear outliers:
- the Journal of Statistical Software (JSS) is disconnected from the others, meaning it has no normalized citations in the first decile. The exception is its self-citations, which are quite big and make it the journal with the 4th-highest Impact Factor of the whole list in 2010 (and apparently the first in 2015).
- the largest renormalized citation count corresponds to the self-citations of the STATA Journal (StataJ).
- CSDA is the most central journal in the sense of the highest (unweighted) degree.
Some further thoughts
All that is just for the fun of it. As mentioned by the authors, citation counts are heavy-tailed, meaning that a few papers account for much of a journal's citations while most papers account for few. As a matter of fact, the total of citations received is mostly driven by a few super-cited papers, and so is the Cited/Citing matrix that I use throughout for building the graph. One reason that could be put forward for why JRSS B does so well is the read papers: for instance, Spiegelhalter et al. (2002), DIC, alone received 11.9% of all JRSS B citations in 2010. Who'd bet on the number of citations this new read paper (JRSS A, though) will receive?
This week I’ll start my Bayesian Statistics master’s course at the Collegio Carlo Alberto. I realized that some of last year’s students got PhD positions in prestigious US universities, so I thought that letting this year’s students have a first grasp of some great Bayesian papers wouldn’t do any harm. The idea is that, in addition to the course, the students will pick a paper from a list and present it (or rather part of it) to the others and to me, which will let them earn some extra points for the final exam mark. It’s in the spirit of Xian’s Reading Classics Seminar (his list here).
I’ve made up the list below, inspired by the reference lists of two textbooks and biased by personal taste: Xian’s Bayesian Choice and Peter Hoff’s First Course in Bayesian Statistical Methods. See the pdf list and zipped folder for the papers. Comments on the list are most welcome!
PS: reference n°1 isn’t a joke!