Statisfaction

Post-doc with me?

Posted in General by nicolaschopin on 18 October 2013

a typical post-doc at CREST

Please note this is a very early, preliminary, non-official announcement, but I understand that our lab might be able to fund a post-doc position next academic year (starting around September 2014). The successful candidate would be expected to interact with a (non-empty!) subset of our Stats group (Arnak Dalalyan, Eric Gautier, Judith Rousseau, Alexandre Tsybakov, and me). In particular, I’d be interested to hear from anyone who would like to apply in order to interact, at least partially, with me (and maybe other lab members) on things related to Bayesian computation (Sequential Monte Carlo, MCMC, fast approximations, etc.). I have various projects in mind, but I’m quite flexible and open to discussion. I think the selection process might take place some time in May-June of next year, but again I don’t have exact details for now.

Singapore –> Oxford

Posted in General by Pierre Jacob on 3 October 2013


A quick post to say that I’m moving from Singapore to Oxford, UK. I will dearly miss my Singaporean colleagues, as well as my morning Laksa and Nasi lemak. I look forward to the skiing season though.

I will work for the next two years as a post-doc with Professors Arnaud Doucet and Yee Whye Teh, on sequential Monte Carlo methods for high-dimensional problems.

Former office mate Alex Thiery is still in Singapore and will start blogging here soon, so we’ll still have two continents covered. Still looking for contributors on the other ones!

Clone wars inside the uniform random variable

Posted in General by Pierre Jacob on 25 September 2013

Hello,

In a recent post Nicolas discussed some limitations of pseudo-random number generation. On a related note, there’s a feature of random variables that I find close to mystical.

In on-going work with Alex Thiery, we had at some point to define precisely the notion of a randomized algorithm, and we essentially followed Keane and O’Brien [1994] (as it happens, an article on arXiv today is also related, maybe, or not). The difficulty comes with the randomness. We can think of a deterministic algorithm as a good old function mapping an input space to an output space, but a random algorithm adds some randomness on top of a deterministic scheme (in an accept-reject step for instance, or a random stopping criterion), so that for fixed inputs the output might still vary. One way to formalise this is to define the algorithm as a deterministic function of the inputs and of a source of randomness; that randomness is represented by a single random variable U, e.g. following a uniform distribution.

The funny, mystical and disturbing thing is that a single uniform random variable is enough to represent an infinity of them. It sounds like an excerpt from the Vedas, doesn’t it? To see this, write a single uniform realization in binary representation. That is, for U \in [0,1] write

U = \sum_{k> 0} b_k 2^{-k}

with b_k = \mbox{floor}(2^k U) \mbox{ mod } 2. The binary representation is b_1b_2b_3b_4b_5\ldots

Realization of a uniform random variable in binary representation

It is easy to see that these zeros and ones are distributed as independent Bernoulli(1/2) variables. Now let’s arrange these digits in a particular pattern, as follows.

Same zeros and ones ordered in a triangle of increasing size

If we take each column or each row of the triangle above, they are independent, and each is again the binary representation of a uniform random variable; you could also consider diagonals or more funky patterns. You could say that the random variable contains an infinity of independent clones, as illustrated with a bit of R code below.
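Here is a minimal R sketch of this idea, with the obvious caveat that a double-precision uniform only carries about 52 usable bits, so the clones below are truncated uniforms rather than exact ones (all names and constants are mine, for illustration only):

set.seed(42)
n       <- 1e5   # number of original uniforms
nbits   <- 50    # binary digits extracted from each one (doubles carry ~52)
nclones <- 5     # number of "clones" built from each uniform
u <- runif(n)
# b_k = floor(2^k * U) mod 2, for k = 1, ..., nbits
bits <- sapply(1:nbits, function(k) floor(2^k * u) %% 2)
# de-interleave the digits: clone j gets bits j, j + nclones, j + 2*nclones, ...
clones <- sapply(1:nclones, function(j) {
  idx <- seq(j, nbits, by = nclones)
  as.vector(bits[, idx] %*% 2^(-seq_along(idx)))  # reassemble a truncated uniform
})
colMeans(clones)       # each close to 1/2
round(cor(clones), 3)  # off-diagonal entries close to 0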

This property actually sounds dangerous now, come to think of it. I think it was always well-known but people might not have made the link with Star Wars. In the end I’m happy to stick with harmless pseudo-random numbers, for safety reasons.

Pseudo-Bayes: a quick and awfully incomplete review

Posted in General, Statistics by nicolaschopin on 18 September 2013

You called me a pseudo-Bayesian. Prepare to die.

A recently arXived paper by Pier Bissiri, Chris Holmes and Steve Walker piqued my curiosity about “pseudo-Bayesian” approaches, that is, statistical approaches based on a pseudo-posterior:

\pi(\theta) \propto p(\theta) \hat{L}(\theta)

where \hat{L}(\theta) is some pseudo-likelihood. Pier, Chris and Steve use in particular

\hat{L}(\theta) = \exp\{ - \lambda R_n(\theta,x) \}

where R_n(\theta,x) is some empirical risk function. A good example is classification; there R_n(\theta,x) could be the proportion of misclassified points:

R_n(\theta,x) = \frac{1}{n}\sum_{i=1}^n \mathbf{I}(y_i\times f_{\theta}(x_i)\leq 0)

where f_{\theta} is some score function parametrised by \theta, and y_i\in\{-1,1\}. (Side note: I find the -1/1 ML convention for the y_i more convenient than the 0/1 stats convention.)

It turns out that this particular kind of pseudo-posterior has already been encountered before, but with different motivations:

  •  Chernozhukov and Hong (JoE, 2003) used it to define new Frequentist estimators based on moment estimation ideas (i.e. take R_n above to be some empirical moment constraint). The focus is on establishing Frequentist properties of, say, the expectation of the pseudo-posterior. (It seems to me that few people in Stats have heard about this paper.)
  • The PAC-Bayesian approach, which originates from Machine Learning, also relies on this kind of pseudo-posterior. To be more precise, PAC-Bayes usually starts by minimising the upper bound of an oracle inequality within a class of randomised estimators. A possible solution is then, say, a single draw from the pseudo-posterior defined above. A good introduction is this book by Olivier Catoni.
  • Finally, Pier, Chris and Steve’s approach is by far the most Bayesian of these three pseudo-Bayesian approaches, in the sense that they try to maintain an interpretation of the pseudo-posterior as a representation of the uncertainty about \theta. Crudely speaking, they don’t look only at the expectation, like the two approaches above, but also at the spread of the pseudo-posterior.

Let me mention briefly that quite a few papers have considered using other types of pseudo-likelihood in a pseudo-posterior, such as empirical likelihood, composite likelihood, and so on, but I will shamefully skip them for now.

To what extent should this growing interest in “pseudo-Bayes” have an impact on Bayesian computation? For one thing, more problems to throw at our favourite algorithms should be good news. In particular, Chernozhukov and Hong mention the possibility of using MCMC as a big advantage of their approach, because the objective functions they consider are typically difficult to minimise directly with optimisation algorithms. PAC-Bayesians also seem to recommend MCMC, but I could not find many PAC-Bayesian papers that go beyond the theory and actually implement it; an exception is this.

On the other hand, these pseudo-posteriors might be quite nasty. First, given the way they are defined, they should not have the kind of structure that makes Gibbs sampling possible. Second, many interesting choices of R_n seem to be irregular or multimodal; again, in the classification example, the 0-1 loss function is typically not continuous in \theta. Hopefully the coming years will witness some interesting research on which computational approaches are best suited to pseudo-Bayes computation, but readers will not be surprised if I put my euros on (some form of) SMC! As a toy illustration, a vanilla sampler for the classification example could look like the sketch below.
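This is only my own toy illustration (simulated data, arbitrary choices of \lambda, prior and proposal scales), targeting p(\theta)\exp\{-\lambda R_n(\theta,x)\} for the classification risk above with a plain random-walk Metropolis algorithm:

set.seed(1)
# toy classification data, labels in {-1, 1}
n <- 200
x <- rnorm(n)
y <- ifelse(x + rnorm(n, sd = 0.5) > 0, 1, -1)
# misclassification rate of the linear score f_theta(x) = theta[1] + theta[2] * x
risk <- function(theta) mean(y * (theta[1] + theta[2] * x) <= 0)
# log pseudo-posterior: Gaussian prior plus -lambda * R_n
log_pseudo_post <- function(theta, lambda = 20)
  sum(dnorm(theta, 0, 10, log = TRUE)) - lambda * risk(theta)
niter <- 1e4
chain <- matrix(NA, niter, 2)
current <- c(0, 0)
for (i in 1:niter) {
  prop <- current + rnorm(2, sd = 0.5)
  if (log(runif(1)) < log_pseudo_post(prop) - log_pseudo_post(current)) current <- prop
  chain[i, ] <- current
}
colMeans(chain[-(1:1000), ])  # pseudo-posterior mean of theta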


the much smaller world of pseudo-random generation

Posted in General by nicolaschopin on 16 September 2013

I’m starting a new course at ENSAE this year, on “Monte Carlo and simulation methods”. I intend to cover pseudo-random generators at the beginning, so I’m thinking about how to teach this material, which I’m not so familiar with.

One very naive remark: in a “truly random world”, when I flip a coin n times, I obtain one out of 2^n possible outcomes, each with probability 2^{-n}. In the real world, if I use a computer to toss n coins, the number of possible outcomes (for these n successive tosses) is bounded by 2^{32}. This is because a stream of pseudo-random numbers is completely determined by the seed (the starting point of the stream), and most generators are based on 32-bit seeds.

Compare 2^{32} with 2^n when n is large, and you see that PRNG is quite a crude approximation of randomness; a quick back-of-the-envelope computation is given below. Of course, it’s not so bad in practice, because usually you are not interested in the exact value of a vector of n successive coin tosses, but rather in some summary of dimension d\ll 2^{32}. Still, the pseudo-random world is much smaller than the random world it is supposed to mimic.
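For instance, assuming a generator driven by a 32-bit seed, the fraction of all possible sequences of n = 100 tosses that it can ever produce is at most:

n <- 100
# at most 2^32 distinct streams, against 2^n equally likely sequences of n tosses
2^(32 - n)  # about 3e-21: a vanishing fraction of the "truly random" world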

I found this remark quite scary, and I think I’ll use it to impress on my students the limitations of PRNG. By the way, if you like horror stories about PRNG, you might find the slides of Régis Lebrun (for a talk he gave at BigMC a few years back) quite entertaining. It was really funny to see the faces of my colleagues turn white as Régis gave more and more evidence that we are often too confident in pseudo-random number generators and oblivious to their limitations. I suspect my own face was very much the same colour.


A newcomer at Statisfaction

Posted in Statistics by nicolaschopin on 7 September 2013

Hi Statisfied readers,

I am Nicolas Chopin, a Professor of Statistics at ENSAE, and my colleagues and good friends who manage Statisfaction kindly agreed to let me join their blog. I work mostly on “Bayesian computation”, i.e. Monte Carlo and non-Monte Carlo methods to compute Bayesian quantities; a strong focus of my research is on Sequential Monte Carlo (aka particle filters).

I don’t plan to blog very regularly, and only on things related, at least in some way, to my research. Well, that’s the idea for now. Stay tuned!

Nicolas

From SVG to probability distributions [with R package]

Posted in R, Statistics by Pierre Jacob on 25 August 2013

Hey,

As illustrations of generic, complex probability density functions on continuous spaces, researchers always use the same examples, for instance mixtures of Gaussian distributions or a banana-shaped distribution defined on \mathbb{R}^2 with density function:

f(x,y) = \exp\left(-\frac{x^2}{200} - \frac{1}{2}(y+Bx^2-100B)^2\right)
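For concreteness, here is a minimal random-walk Metropolis sketch targeting this (unnormalised) density; the value B = 0.03 and the proposal scale are assumptions of mine, chosen only for illustration:

# log of the unnormalised banana density above, with an assumed B = 0.03
banana_logd <- function(z, B = 0.03)
  -z[1]^2 / 200 - 0.5 * (z[2] + B * z[1]^2 - 100 * B)^2
niter <- 5e4
chain <- matrix(NA, niter, 2)
current <- c(0, 0)
for (i in 1:niter) {
  proposal <- current + rnorm(2, sd = 2)  # random-walk proposal
  if (log(runif(1)) < banana_logd(proposal) - banana_logd(current)) current <- proposal
  chain[i, ] <- current
}
plot(chain, pch = 20, col = "gold", xlab = "x", ylab = "y")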

If we draw a sample from this distribution using MCMC we obtain a scatterplot like this one:

Fig. 1: a sample from the very lame banana-shaped distribution

Clearly it doesn’t really look like a banana, even if you colour the dots yellow as is done here. Actually it looks more like a boomerang, if anything. I was worried about this for a while, until I came up with a more realistic banana-shaped distribution:

Fig. 2: a sample from the realistic banana-shaped distribution

See how the shape is well defined compared to the first figure? And there’s even the little tail, which proves so convenient when we want to peel the fruit. More generally, we might want to create target density functions based on arbitrary shapes. For this you can now try RShapeTarget, which you can install directly from R using devtools:

library(devtools)
install_github(repo="RShapeTarget", username="pierrejacob")

The package parses SVG files representing shapes, and creates target densities from them. More precisely, an SVG file contains “paths”, which are sequences of points (for instance the above banana is a single closed path). The associated log density at any point x is defined by -1/(2\lambda) \times d(x, P), where P is the path of the shape closest to x and d(x,P) is the distance between the point and that path. The parameter \lambda specifies the rate at which the density decays as the point moves away from the shape. With this you can define the maple leaf distribution, as a tribute to JSM 2013:

Fig. 3: a sample from the “O Canada” probability distribution.

In the package you can get a distribution from an SVG file using the following code:

library(RShapeTarget)
# create target from file
my_shape_target <- create_target_from_shape(my_svg_file_name, lambda = 1)
# test the log density function on 25 randomly generated points
my_shape_target$logd(matrix(rnorm(50), ncol = 2), my_shape_target$algo_parameters)
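Under the hood the idea is just a distance-based log density. Here is a minimal standalone sketch of it (my own illustration, not the package’s actual code), which approximates d(x, P) by the distance to the nearest vertex of the path:

# log density proportional to -d(x, P) / (2 * lambda), with d(x, P) approximated
# by the distance from x to the nearest vertex of the path
shape_logdensity <- function(x, path_points, lambda = 1) {
  d2 <- rowSums(sweep(path_points, 2, x)^2)  # squared distances to each vertex
  -sqrt(min(d2)) / (2 * lambda)
}
# toy "path": the four corners of the unit square
square <- rbind(c(0, 0), c(1, 0), c(1, 1), c(0, 1), c(0, 0))
shape_logdensity(c(0.5, -0.2), square)  # points close to the path get a higher value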

Since characters are just a bunch of paths, you can also define distributions based on words, for instance:

Hodor: Hodor.

which is done as follows (warning: only a-z and A-Z are allowed for now, no numbers, spaces or punctuation):

library(RShapeTarget)
word_target <- create_target_from_word("Hodor")

For the words, I defined the target density function as before, except that it’s constant on the letters: if a point is outside a letter, its density is computed based on the distance to the nearest path; if it’s inside a letter, it’s just constant, so that the letters are “filled” with some constant density. I thought it’d look better.

Now I’m no longer worried about the banana-shaped distribution, but rather by the fact that the only word I could think of was “Hodor” (with whom you can chat over there).

Joint Statistical Meeting 2013

Posted in General, Seminar/Conference, Statistics by Pierre Jacob on 23 July 2013
A typical statistical meeting.

Hey,

In a few weeks (August 3-8) I’ll attend the Joint Statistical Meeting in Montréal, Canada. According to Wikipedia it’s been held every year since 1840 and now gathers more than 5,000 participants!

I’ll speak in a session organized by Scott Schmidler, entitled Adaptive Monte Carlo Methods for Bayesian Computation; you can find the session programme here [online program]. My talk will be about score and observed information matrix estimation in state-space models.

According to rumour and Christian’s reflections on past years (2009, 2010, 2011), I should prepare my schedule in advance to really enjoy this giant meeting. So if you want to meet there, please send me an e-mail!

See you in Montréal!

Path storage in the particle filter

Posted in Statistics by Pierre Jacob on 12 July 2013
Typical ancestry tree generated by a particle filter

Hey particle lovers,

With Lawrence Murray and Sylvain Rubenthaler, we looked at how to store the paths in the particle filter, and at the related expected memory cost. We just arXived a technical report about it. Would you like to know more?

(more…)

Intractable likelihoods, unbiased estimators and sign problem

Posted in Statistics by Pierre Jacob on 1 July 2013
Computing a Taylor expansion with random truncation can be done Swiftly.

Hey all,

We’re in the Big Data era blablabla, but advanced computational methods usually don’t scale well enough to match the increasing sizes of datasets. For instance, even in the simple case of i.i.d. data y_1, y_2, \ldots, y_n and an associated likelihood function \mathcal{L}(\theta; y_1, y_2, \ldots, y_n), the cost of evaluating the likelihood function at any parameter \theta typically grows at least linearly with n. If you then plug that likelihood into an optimization technique to find the maximum likelihood estimate, or into a sampling technique such as Metropolis-Hastings to sample from the posterior distribution, the computational cost grows accordingly for a fixed number of iterations. However, you can get unbiased estimates of the log-likelihood by drawing m < n indices i_1, \ldots, i_m uniformly in the index set \{1, \ldots, n\} and computing (n/m) \log \mathcal{L}(\theta; y_{i_1}, \ldots, y_{i_m}). This way you sub-sample from the whole dataset, and you can choose m according to your computational budget; a minimal sketch is given below. But is it possible to perform inference with these estimates instead of the complete log-likelihood?
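Here is a minimal sketch of that subsampling estimate on simulated Gaussian data (a toy example of my own, not taken from the report):

set.seed(1)
# toy i.i.d. Gaussian data and the full log-likelihood at a given theta
n <- 1e6
y <- rnorm(n, mean = 2)
full_loglik <- function(theta) sum(dnorm(y, mean = theta, log = TRUE))
# unbiased subsampled estimate based on m indices drawn uniformly
sub_loglik <- function(theta, m) {
  idx <- sample.int(n, m)
  (n / m) * sum(dnorm(y[idx], mean = theta, log = TRUE))
}
full_loglik(2.1)
mean(replicate(100, sub_loglik(2.1, m = 1000)))  # close to the full value on average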

(more…)
