## Bayesian classics

This week I’ll start my Bayesian Statistics master’s course at the Collegio Carlo Alberto. I realized that some of last year’s students got PhD positions at prestigious US universities, so I thought that giving this year’s students a first grasp of some great Bayesian papers wouldn’t do any harm. The idea is that, in addition to the course, each student will pick a paper from a list and present it (or rather part of it) to the others and to me, which will earn them some extra points on the final exam mark. It’s in the spirit of Xian’s Reading Classics Seminar (his list here).

I’ve made up the list below, inspired by the reference lists of two textbooks and biased by personal taste: Xian’s Bayesian Choice and Peter Hoff’s First Course in Bayesian Statistical Methods. See the pdf list and zipped folder for the papers. Comments on the list are most welcome!

Julyan

PS: reference n°1 isn’t a joke!

## momentify R package at BAYSM14

I presented an arXived paper of my postdoc at the very successful Young Bayesian Conference in Vienna. The big picture of the talk is simple: there are situations in Bayesian nonparametrics where you cannot sample from the posterior distribution, but can only compute posterior expectations (so-called *marginal methods*), so you cannot, for instance, provide credible intervals. But sometimes all the moments of the posterior distribution are available as posterior expectations, so morally you should be able to say more about the posterior distribution than just reporting the posterior mean. To be specific, we consider a hazard mixture model

$$\tilde h(t) = \int k(t; y)\, \tilde\mu(\mathrm{d}y),$$

where $k$ is a kernel and the mixing distribution $\tilde\mu$ is random and discrete (the Bayesian nonparametric approach).

We consider the survival function $S$, which is recovered from the hazard rate by the transform

$$S(t) = \exp\left(-\int_0^t \tilde h(s)\,\mathrm{d}s\right),$$

and some possibly censored survival data with survival function $S$. Then it turns out that all the posterior moments of the survival curve evaluated at any time $t$ can be computed.

The nice trick of the paper is to use the representation of a distribution in a Jacobi polynomial basis, where the coefficients are linear combinations of its moments. So one can sample from an approximation of the posterior, and with a posterior sample at hand we can do everything, including credible intervals!

I’ve wrapped up the few lines of code in an R package called momentify (not on CRAN). With a sequence of moments of a random variable supported on [0,1] as input, the package does two things:

- evaluates the approximate density
- samples from it
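Since the package is not on CRAN, here is a minimal self-contained sketch of the underlying idea, written with shifted Legendre polynomials (the Jacobi polynomials orthogonal with respect to the uniform weight on [0,1]). The function names and the grid-based sampler are my own illustration, not the actual momentify code:

```r
# Moment-based density approximation on [0,1]: expand the density in the
# shifted Legendre basis, whose coefficients are linear combinations of
# the raw moments.

shifted_legendre_coefs <- function(N) {
  # A[k+1, j+1] = coefficient of x^j in the shifted Legendre polynomial P~_k
  A <- matrix(0, N + 1, N + 1)
  for (k in 0:N) for (j in 0:k)
    A[k + 1, j + 1] <- (-1)^(k + j) * choose(k, j) * choose(k + j, j)
  A
}

approx_density <- function(moments) {
  # moments = c(m1, ..., mN): raw moments E[X^j] of a r.v. on [0,1]
  N <- length(moments)
  A <- shifted_legendre_coefs(N)
  cks <- A %*% c(1, moments)          # c_k = E[P~_k(X)], linear in moments
  function(x) {
    xp <- outer(x, 0:N, `^`)          # powers x^0, ..., x^N
    P  <- xp %*% t(A)                 # P[i, k+1] = P~_k(x_i)
    # f(x) ~ sum_k (2k+1) c_k P~_k(x); truncate negative wiggles at 0
    pmax(drop(P %*% ((2 * (0:N) + 1) * cks)), 0)
  }
}

sample_from_moments <- function(moments, n, grid = seq(0, 1, length = 512)) {
  # crude inverse-CDF sampling on a grid of the approximate density
  f <- approx_density(moments)(grid)
  cdf <- cumsum(f); cdf <- cdf / max(cdf)
  grid[findInterval(runif(n), cdf) + 1]
}

# Example: recover a Beta(2, 5) density from its first 6 moments,
# E[X^j] = prod_{r<j} (2+r)/(7+r)
mom <- sapply(1:6, function(j) prod((2 + 0:(j - 1)) / (7 + 0:(j - 1))))
f <- approx_density(mom)
```

For a polynomial density like Beta(2, 5), six moments already make the reconstruction exact up to numerical error; in general, more moments give a finer approximation.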

A package example, approximating a mixture of betas from its first 2 to 7 moments, gives the following result:

## Using R in LaTeX with knitr and RStudio

I presented today at the INSEE R user group (FLR) how to use **knitr** (the evolution of Sweave) for writing documents that are self-contained with respect to the source code: your data changed? No big deal, just compile your .Rnw file again ([Ctrl+Shift+I] in RStudio) and you are done, with an updated version of your paper! Some benefits with respect to keeping two separate .R and .tex files: everything is integrated in a single piece of software (RStudio), and you can call variables in your text with the \Sexpr{} command. Slow compilation is no longer a real issue either, since setting “cache=TRUE” in the chunk options avoids re-evaluating unchanged chunks, which speeds things up.
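To give a concrete idea of what this looks like, here is a minimal .Rnw sketch (the chunk names and toy data are made up for illustration):

```latex
\documentclass{article}
\begin{document}

% a cached chunk: re-evaluated only when its code changes
<<data, cache=TRUE>>=
x <- rnorm(100)  # imagine your real data loading here
@

% call R variables directly in the text
The sample mean is \Sexpr{round(mean(x), 2)}.

<<histogram, fig.height=4>>=
hist(x)
@

\end{document}
```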

I share the (brief) slides below. They won’t be much help to those who already use knitr, but they give the first steps for anyone who would like to give it a try.

## Dropbox Space Race

Hi,

in addition to the referral program (you refer a new user, you win an extra 0.5 GB), the Dropbox Space Race will give you 3 GB of extra space (for 2 years) if you register with your email address from a competing university. The best schools will get more space. Here are the top 100 schools. Come on, there is no French school in the top 100!

Thanks Nicolas for the info.

## Next R meeting in Paris INSEE: ggplot2 and parallel computing

Hi,

our group of R users from INSEE, aka FLR, meets monthly in Paris. The next meeting is on Wed 13 (tomorrow), 1–2 pm, room 539 (an ID is needed to get in; see the map to access INSEE), about ggplot2 and parallel computing. Since the first meeting in February, presentations have covered *hot* topics like web scraping, C in R, RStudio, SQLite databases and cartography (most of them in French). See you there!

## Priors on probability measures

Hi,

for the next GTB meeting at Crest, on 3rd May, I will present Peter Orbanz’s work on projective limit random probabilities on Polish spaces. It will follow my previous presentation on Bayesian nonparametrics and the Dirichlet process.

The article provides a means of constructing an arbitrary prior distribution on the set of probability measures by working on its finite-dimensional marginals. The vanilla example is the Dirichlet process, which is characterized by its Dirichlet distribution marginals on any finite partition of the space (other examples are the normalized inverse Gaussian process and the Pólya tree). The figure above illustrates the projective property of the marginals.
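To make the projective property concrete in the Dirichlet process case: the marginals on a finite partition are Dirichlet, and merging cells of the partition stays consistent thanks to the aggregation property of the Dirichlet distribution (a standard fact, sketched here with base measure $\alpha$):

```latex
% marginals of the Dirichlet process on a partition (A_1, ..., A_k):
(P(A_1), \dots, P(A_k)) \sim \mathrm{Dirichlet}\big(\alpha(A_1), \dots, \alpha(A_k)\big)

% coarsening the partition by merging A_1 and A_2 gives, by aggregation,
(P(A_1 \cup A_2), P(A_3), \dots, P(A_k))
  \sim \mathrm{Dirichlet}\big(\alpha(A_1) + \alpha(A_2), \alpha(A_3), \dots, \alpha(A_k)\big)
```

which is exactly the marginal prescribed on the coarser partition, so the family of finite-dimensional distributions is projective.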

Peter will speak at the ISBA 2012 Kyoto session *On the uses of random probabilities in Bayesian inference*, along with Ramses Mena and Antonio Lijoi. I’ll write more about that later on!

## A world without referees

In an *invited contribution* to the latest ISBA Bulletin, Larry Wasserman discusses the “almost 350 years old” peer review system (paper). Have a look at it, it’s quite thought-provoking!

> We should think about our field like a marketplace of ideas. Everyone should be free to put their ideas out there. There is no need for referees. Good ideas will get recognized, used and cited. Bad ideas will be ignored. This process will be imperfect. But is it really better to have two or three people decide the fate of your work?

A world where you simply put your work on arXiv or on your web page, and save so much time: isn’t it tempting?

## Rochebrune Workshop 2012

Hey,

Last week I attended the Rochebrune workshop for the second time. The organizers’ genius idea (Liliane Bel and Eric Parent from AgroParisTech, Jean-Jacques Borreux from Liège University) is to mix *ski, stats and spirits* (mostly Genepi and Chartreuse) in a remote alpine chalet at the top of the Megève ski resort.

Most of the attendees are (young) Bayesians working in applied fields, ranging from biology, ecology and epidemiology to meteorology and climatology. We had great talks about fish, trees, birds (Joël’s busard cendré, the Montagu’s harrier), drugs and avalanches. More methodological talks dealt with extremes, Bayesian model averaging, and simulation: variational approximations, INLA, ABC, and MCMC in general. We also had a tutorial on JAGS and WinBUGS/OpenBUGS (and how to interface them with R using rjags and R2WinBUGS). I presented my work on multidimensional covariate-dependent Dirichlet processes (all presentations here).

In addition to the 10 talks per day, a sacrosanct 5-hour skiing slot was reserved in the afternoon, with lessons from crazy Mégevan instructors. They must be really good: Pierre, don’t worry, I jumped and fell significantly less than two years ago. Have a Chartreuse, cheers!

## Valentine Day and lonely people in France

Insee recently published a paper (in French), well in line with Valentine’s Day, which characterizes people living alone or as a couple by socio-professional category, along with the data.

Between 1990 and 2008 (two population surveys), the proportion of people living alone increased mostly for people under 60. After 60, 38% of women live alone, against only 17% of men, because women tend to marry older men and live longer than they do on average. See that proportion by age:

Spatially, there is a kind of South/North divide. During working life, people living alone are mostly in the South (left map), while retired people living alone are mostly in the North (right map), with the exception of Île-de-France (Paris), which shows a high proportion at any age:

## Daily casualties in Syria

Every new day brings its toll of new deaths in Syria… Here is an attempt to learn about the Syrian uprising from the figures. Data vary among sources: the Syrian opposition provides the number of casualties by day (here on Dropbox), updated on 8 February 2012, with a total exceeding 8,000.

We first note that the attacks are accelerating, as the cumulated graph is mostly convex (click to enlarge):

Plotting the numbers by day shows the bloody situation of Fridays, a gathering day in the Muslim calendar. This was especially true at the beginning of the uprising, but lately any other day can be equally deadly:

On average, there are almost twice as many deaths on Fridays as on any other day:

Here are boxplots for the logarithm of daily casualties by day of the week:

and their density estimates, first coloured by day of the week, then by Friday vs rest of the week:

Here is the code (with some clumsy parts for fitting the data frames to ggplot; do not hesitate to comment on it):

```r
library(ggplot2)

input <- read.csv("http://dl.dropbox.com/u/1391912/Blog%20statisfaction/data/syria.txt",
                  sep = "\t", header = TRUE, stringsAsFactors = FALSE)

# flag Fridays
input$LogicalFriday <- factor(input$WeekDay == "Friday",
                              levels = c(FALSE, TRUE),
                              labels = c("Not Friday", "Friday"))

input$Date <- as.Date(input$History, "%d/%m/%Y")

# trick to sort the legend by day of the week
input$WeekDays <- factor(input$WeekDay,
                         levels = unique(as.character(input$WeekDay[7:13])))

# cumulated number of casualties over time
qplot(x = Date, y = cumsum(Number), data = input, geom = "line",
      color = I("red"), xlab = "", ylab = "", lwd = I(1))

# daily casualties, Fridays highlighted
qplot(x = as.factor(Date), y = Number, data = input, geom = "bar",
      fill = LogicalFriday, xlab = "", ylab = "")

# density of log-casualties, Friday vs rest of the week
qplot(log(Number + 1), data = input, geom = "density",
      fill = LogicalFriday, xlab = "", ylab = "", alpha = I(.2))

# density of log-casualties by day of the week
qplot(log(Number + 1), data = input, geom = "density",
      fill = WeekDay, xlab = "", ylab = "", alpha = I(.2))

# boxplots of log-casualties by day of the week
qplot(WeekDays, log(Number + 1), data = input, geom = "boxplot",
      xlab = "", ylab = "", colour = WeekDays)
```
