Arnaud Doucet, Sylvain Rubenthaler and I have just put a technical report on arXiv about estimating the first- and second-order derivatives of the log-likelihood (also called the score and the observed information matrix respectively) in general (intractable) statistical models, and in particular in (non-linear non-Gaussian) state-space models. We call them “derivative-free” estimates because they can be computed even if the user cannot compute any kind of derivatives related to the model (as opposed to e.g. this paper and this paper). Actually in some cases of interest we cannot even evaluate the log-likelihood point-wise (we do not have a formula for it), so forget about explicit derivatives. Would you like to know more?
and of course Happy New Year (2013 is the international year of statistics!).
Last week the ISBA Regional Meeting was held in Banaras / Varanasi, in the North of India. The conference was well attended, with leading figures such as Jayanta K. Ghosh, José Bernardo, James Berger, Peter Green, and Christian Robert (who blogged about it), and around 350 participants overall.
With Robin Ryder we wrote a paper titled The Wang-Landau Algorithm Reaches the Flat Histogram in Finite Time and it has been accepted by the Annals of Applied Probability (arXiv preprint here). I’m especially happy about it since it was the last remaining unpublished chapter of my PhD thesis. In this post I’ll try to explain, on a simple example, what we proved.
What do you do when you see the word “condom” in the title of a new arXiv entry?! You click with wild excitement of course! And you end up reading
At Statisfaction’s headquarters (located inside a volcanic crater on a distant planet), we received an email from Jeffrey Myers from the American Statistical Association to advertise the International Year of Statistics, 2013!
To quote the webpage:
The goals of Statistics2013 include:
- increasing public awareness of the power and impact of Statistics on all aspects of society;
- nurturing Statistics as a profession, especially among young people; and
- promoting creativity and development in the sciences of Probability and Statistics
Those are great goals that we obviously support! Statistics is an important field of applied mathematics and has been for a while now, but public awareness still has to increase. At cocktail parties, it still isn’t super sexy to admit that you’re a statistician. It should be! And it’s good that some people are working on that at Amstat, at Tumblr, at NYTimes, at Rstudio and elsewhere.
We’ll go on blogging here, maybe with new contributors and more technical posts shortly. Stay tuned!
Hi folks!
Last Tuesday a seminar on Bayesian procedures for inverse problems took place at CREST. We had time for two presentations by young researchers, Bartek Knapik and Kolyan Ray. Both presentations dealt with the problem of observing a noisy version of a linear transform of the parameter of interest,

$Y = K\mu + \varepsilon,$

where $K$ is a linear operator and $\varepsilon$ a Gaussian white noise. Both presentations considered asymptotic properties of the posterior distribution (their papers can be found on arXiv, here for Bartek’s and here for Kolyan’s). There is a wide literature on asymptotic properties of the posterior distribution in direct models. When looking at the concentration of the posterior $\Pi(\cdot \mid Y)$ toward the true parameter $\mu_0$ given the data, with respect to some distance $d$, a well-known problem is to derive concentration rates, that is the rate $\epsilon_n$ such that

$\Pi\left( d(\mu, \mu_0) \le \epsilon_n \mid Y \right) \to 1.$
For inverse problems, the usual methods, as introduced by Ghosal, Ghosh and van der Vaart (2000), generally fail, and results in this setting are thus in general difficult to obtain.
Bartek presented some very refined results in the conjugate case. He managed to obtain results on the concentration rates of the posterior distribution, on Bayesian credible sets, and Bernstein–von Mises theorems – which state that the posterior is asymptotically Gaussian – when estimating a linear functional of the parameter of interest. Kolyan gave some general conditions on the prior to achieve a given concentration rate, and proved that these techniques lead to optimal concentration rates for classical models.
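As a toy illustration of the conjugate setting (notation and numbers are mine, not taken from either paper), consider a diagonal sequence-space model $Y_i = \kappa_i \mu_i + n^{-1/2} Z_i$ with independent Gaussian priors on the coordinates $\mu_i$: the posterior is then Gaussian coordinate by coordinate.

```python
import numpy as np

# Toy conjugate inverse problem (my own notation, not from the talks):
# Y_i = kappa_i * mu_i + n^{-1/2} Z_i, with prior mu_i ~ N(0, lam_i).
rng = np.random.default_rng(1)
n, d = 1000, 200
i = np.arange(1, d + 1, dtype=float)
kappa = i ** -1.0        # decaying singular values: a mildly ill-posed operator
lam = i ** -2.0          # prior variances
mu_true = i ** -1.5
y = kappa * mu_true + rng.normal(size=d) / np.sqrt(n)

# Normal-normal conjugacy: each coordinate's posterior is Gaussian with
# variance lam/(n*kappa^2*lam + 1) and mean n*kappa*lam*y/(n*kappa^2*lam + 1)
post_var = lam / (n * kappa**2 * lam + 1)
post_mean = n * kappa * lam * y / (n * kappa**2 * lam + 1)
```

The posterior variance always sits below the prior variance, and the contraction rates in Bartek's results describe how fast such posteriors concentrate around $\mu_0$ as $n$ grows.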
I knew only a little about inverse problems, but both talks were very accessible and I will surely get more involved in this field!
On this useful series of posts from Freakonometrics:
I stumbled upon this 1996 article published in Ecological Applications:
It was a really fun and surprising read to me, so I felt like sharing. Most surprising was the argument that established Frequentism had a better track record than Bayesian stats. What a weird remark from a researcher! Hopefully the atmosphere among ecologists has changed since 1996 (and people have learned about Bayesian model choice), but I think that such articles explain why experienced Bayesian statisticians spend time writing replies like “Not only defended but also applied”: The perceived absurdity of Bayesian inference and the recently-arXived anti-Bayesian moment and its passing, for instance.
For the next GTB meeting at CREST, on 3rd May, I will present Peter Orbanz’s work on Projective limit random probabilities on Polish spaces. It will follow my previous presentation on Bayesian nonparametrics and the Dirichlet process.
The article provides a means of constructing an arbitrary prior distribution on the set of probability measures by working on its finite-dimensional marginals. The vanilla example is the Dirichlet process, which is characterized by its Dirichlet distribution marginals on any finite partition of the space (other examples are the Normalized Inverse Gaussian Process and the Pólya Tree). The figure above illustrates the projective property of the marginals.
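A quick numerical sanity check of that characterization (a sketch with made-up parameters): merging cells of the partition turns a Dirichlet into a coarser Dirichlet, which is exactly the projective consistency the construction relies on.

```python
import numpy as np

# Aggregation property of the Dirichlet distribution: if (p1, p2, p3) is
# Dirichlet(a1, a2, a3), then (p1 + p2, p3) is Dirichlet(a1 + a2, a3).
rng = np.random.default_rng(0)
alpha = np.array([1.0, 2.0, 3.0])
draws = rng.dirichlet(alpha, size=200_000)

# Merging the first two cells should give p ~ Beta(1 + 2, 3) = Beta(3, 3),
# whose mean is 0.5
merged = draws[:, 0] + draws[:, 1]
print(merged.mean())
```

This is the consistency condition the finite-dimensional marginals must satisfy for the projective-limit construction to go through.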
Peter will speak at the ISBA 2012 Kyoto session On the uses of random probabilities in Bayesian inference, along with Ramses Mena and Antonio Lijoi. I’ll write more about that later on!
Following Pierre’s post on psycho dice, I want here to work out by which average margin repeated plays would have to deviate before one could call them influenced by the mind. The rules are the following (excerpt from the novel Midnight in the Garden of Good and Evil, by John Berendt):
You take four dice and call out four numbers between one and six–for example, a four, a three, and two sixes. Then you throw the dice, and if any of your numbers come up, you leave those dice standing on the board. You continue to roll the remaining dice until all the dice are sitting on the board, showing your set of numbers. You’re eliminated if you roll three times in succession without getting any of the numbers you need. The object is to get all four numbers in the fewest rolls.
Simplify the game by forgetting the elimination step. Suppose first that one plays with a single fair die with 1/p faces; the probability that it shows the right face is p (for somebody with no psy powers). Denote by X the time to first success with one die, which follows, by independence of the rolls, a geometric distribution Geom(p) (with the starting-at-1 convention). X has the following probability mass and cumulative distribution functions, with q = 1 - p:

$P(X = k) = p\,q^{k-1} \quad \text{and} \quad P(X \le k) = 1 - q^k, \quad k \ge 1.$
Now denote by Y the time to success in the game with n dice. This simultaneous case is the same as playing n times independently with one die, and then taking Y as the maximum of the n times to success. So Y’s cdf is

$P(Y \le k) = P(X \le k)^n = (1 - q^k)^n.$
Its pmf can be obtained either exactly by difference, or up to a normalizing constant C by differentiating the cdf in k:

$P(Y = k) = (1 - q^k)^n - (1 - q^{k-1})^n \approx C\, q^{k}\,(1 - q^{k})^{n-1}.$
As it is not too far from the Geom(p) pmf, one can use the latter as the proposal in a Monte Carlo estimate. If $X_1, \dots, X_N$ are N independent Geom(p) variables, then

$\hat{E}[Y] = \frac{\sum_{i=1}^N w(X_i)\, X_i}{\sum_{i=1}^N w(X_i)}, \qquad w(k) = \frac{P(Y = k)}{p\,q^{k-1}},$

where the self-normalization takes care of the unknown constant C.
The estimates $\hat{E}[Y]$ and $\hat{\sigma}_Y$ then follow from a few lines of simulation code.
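A minimal sketch of such a computation, here in Python (function and variable names are mine), using the exact pmf of Y obtained by difference and Geom(p) as the proposal:

```python
import math
import random

def estimate_max_geom(n=4, p=1/6, N=100_000, seed=42):
    """Self-normalized importance sampling estimates of E[Y] and sd(Y),
    where Y is the maximum of n iid Geom(p) waiting times; the proposal
    is Geom(p) itself."""
    rng = random.Random(seed)
    q = 1.0 - p
    f_Y = lambda k: (1 - q**k)**n - (1 - q**(k - 1))**n  # pmf of Y, by difference
    g = lambda k: p * q**(k - 1)                          # Geom(p) proposal pmf
    sw = s1 = s2 = 0.0
    for _ in range(N):
        u = 1.0 - rng.random()                   # u in (0, 1]
        x = int(math.log(u) / math.log(q)) + 1   # X ~ Geom(p) by inversion
        w = f_Y(x) / g(x)                        # importance weight
        sw += w
        s1 += w * x
        s2 += w * x * x
    m1, m2 = s1 / sw, s2 / sw                    # estimates of E[Y], E[Y^2]
    return m1, math.sqrt(m2 - m1**2)

e_hat, sd_hat = estimate_max_geom()
```

With the novel’s four six-sided dice (n = 4, p = 1/6), the expected number of rolls comes out on the order of a dozen.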
Now it is possible to use a test (from classical test theory) to estimate the average margin by which repeated games should deviate in order to give statistical evidence of psy powers. We are interested in testing $H_0 : E[Y] = \mu_0$ (no psy powers) against $H_1 : E[Y] < \mu_0$, for repeated plays.
If the game is played k times, then one rejects $H_0$ at the 5% level if the sample mean is less than $\mu_0 - z_{0.95}\,\sigma_Y/\sqrt{k}$, where $z_{0.95} \approx 1.645$ is the 95% standard normal quantile. To indicate the presence of a psy power, someone playing $k = (z_{0.95}\,\sigma_Y/2)^2$ times should thus perform, on average, 2 rolls below the predicted value (1 roll below when playing four times as many games). I can’t wait, I’m going to grab a die!
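The rejection threshold and the number of games needed for a given margin can be sketched in a couple of lines (the value of $\sigma_Y$ below is illustrative only, standing in for the Monte Carlo estimate):

```python
import math

def threshold(mu0, sigma, k, z=1.645):
    """One-sided 5% test: reject H0 (no psy powers) if the sample mean
    over k games falls below this value."""
    return mu0 - z * sigma / math.sqrt(k)

def games_needed(sigma, margin, z=1.645):
    """Number of games k such that the rejection threshold sits
    `margin` rolls below mu0."""
    return math.ceil((z * sigma / margin) ** 2)
```

For instance, with $\sigma_Y \approx 6.5$, `games_needed(6.5, 2)` gives 29 games for a 2-roll margin and `games_needed(6.5, 1)` gives 115 for a 1-roll margin.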
A quick post on a one-day seminar on Monte Carlo methods for inverse problems in image and signal processing, that will take place at Telecom ParisTech on Tuesday, November 15th. Details and abstracts are on the seminar’s webpage:
(for English-reading people, here is a Google-translated version). The seminar is organised by Gersende Fort, from Telecom and CNRS, and the program looks very interesting: the topics are varied and fairly methodological. The webpage is in French but I think the talks are going to be in English, since there will be English-speaking people in the audience. I’m very happy to participate by presenting the Parallel Adaptive Wang-Landau algorithm I’ve been blogging about lately, and Christian Robert is going to present our parallel Independent Metropolis-Hastings paper, so I can’t wait to get more feedback on both.
See you on Tuesday?