## Intractable likelihoods, unbiased estimators and sign problem

Hey all,

We’re at the Big Data era, blablabla, but the advanced computational methods usually don’t scale well enough to match the increasing sizes of datasets. For instance, even in a simple case of i.i.d. data $y_1, \ldots, y_n$ and an associated likelihood function $L(\theta) = \prod_{i=1}^{n} f(y_i; \theta)$, the cost of evaluating the likelihood function at any parameter $\theta$ typically grows at least linearly with $n$. If you then plug that likelihood into an optimization technique to find the Maximum Likelihood Estimate, or into a sampling technique such as Metropolis–Hastings to sample from the posterior distribution, the computational cost grows accordingly for a fixed number of iterations. However, you can get unbiased estimates of the log-likelihood $\log L(\theta)$ by drawing $m$ indices $i_1, \ldots, i_m$ uniformly in the index set $\{1, \ldots, n\}$ and by computing $\frac{n}{m} \sum_{k=1}^{m} \log f(y_{i_k}; \theta)$. This way you sub-sample from the whole dataset, and you can choose $m$ according to your computational budget. But is it possible to perform inference with these estimates instead of the complete log-likelihood?
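In code, the sub-sampling estimator is a one-liner; here is a minimal sketch on a Gaussian toy model (the model, the names and the parameter values are mine, purely for illustration):

```python
import numpy as np

def loglik_estimate(data, logpdf, theta, m, rng):
    """Unbiased estimate of sum_{i=1}^n log f(y_i; theta), obtained from
    m indices drawn uniformly (with replacement) in {1, ..., n}."""
    n = len(data)
    idx = rng.integers(0, n, size=m)          # the m sub-sampled indices
    return (n / m) * np.sum(logpdf(data[idx], theta))

# Toy check on i.i.d. N(1, 1) data.
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=100_000)
logpdf = lambda x, th: -0.5 * (x - th) ** 2 - 0.5 * np.log(2 * np.pi)
full = np.sum(logpdf(y, 1.0))                 # the O(n) quantity we estimate
est = loglik_estimate(y, logpdf, 1.0, m=1_000, rng=rng)
```

Averaging many such estimates recovers the full log-likelihood, at a per-estimate cost of $O(m)$ instead of $O(n)$.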

An approach to this problem is provided in a particularly remarkable paper that studies a Metropolis–Hastings algorithm using unbiased estimates of the target density instead of the target density itself, and shows that the result is *correct* in the sense that you get a Markov chain converging towards the exact target distribution. The problem is that it is not so trivial to get unbiased likelihood estimates. In the i.i.d. case mentioned above, you get unbiased estimates of the log-likelihood, but of course the unbiased property is lost if you simply take the exponential of your estimates (by Jensen's inequality the exponentiated estimates are biased upwards). Abstracting from the statistical context, suppose you have a random variable $Z$ with expectation $\lambda$. Then $\exp(Z)$ doesn't have expectation $\exp(\lambda)$, but is it still possible to construct an unbiased estimate of $\exp(\lambda)$ by doing something else? More generally, can you do it for any function $f(\lambda)$ and not only for the exponential?
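Such an algorithm can be sketched in a few lines, assuming a non-negative unbiased estimator of the (unnormalized) target is already available. The crucial detail is that the estimate at the current point is recycled until a move is accepted, not refreshed at every step. The toy target and all names below are illustrative:

```python
import numpy as np

def mh_with_estimates(pi_hat, x0, n_iters, step, rng):
    """Metropolis-Hastings where the unnormalized target density pi is
    replaced by a non-negative unbiased estimator pi_hat(x, rng).
    The estimate z of pi at the current point is reused across iterations;
    this is what makes the chain target the exact posterior."""
    x = x0
    z = pi_hat(x, rng)                        # estimate of pi(x), recycled
    chain = np.empty(n_iters)
    for i in range(n_iters):
        x_prop = x + step * rng.normal()      # symmetric random-walk proposal
        z_prop = pi_hat(x_prop, rng)
        if rng.uniform() < z_prop / z:        # accept with ratio of estimates
            x, z = x_prop, z_prop
        chain[i] = x
    return chain

# Toy check: a noisy but unbiased, non-negative estimate of the N(0,1)
# density, with multiplicative Gamma(5, 1/5) noise (mean one).
rng = np.random.default_rng(1)
pi_hat = lambda x, rng: np.exp(-0.5 * x * x) * rng.gamma(5.0, 0.2)
chain = mh_with_estimates(pi_hat, 0.0, 20_000, step=2.0, rng=rng)
```

Despite the noise in the density evaluations, the chain's stationary distribution is exactly N(0,1); the noise only degrades mixing.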

This very relevant article appeared recently on arXiv: Playing Russian Roulette with Intractable Likelihoods by Mark Girolami, Anne-Marie Lyne, Heiko Strathmann, Daniel Simpson, Yves Atchadé; see also Xian’s comments on it. The motivation of the paper is slightly different from the one described above, but the techniques are similar. Let me summarize the main ideas used in that paper. First, develop the function $f$ into a Taylor series around $0$:

$$f(\lambda) = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!} \lambda^k,$$

and take an infinite triangular array $(Z_{k,j})_{k \geq 1, \, 1 \leq j \leq k}$ of independent copies of $Z$. Then the random variable

$$\hat{f} = f(0) + \sum_{k=1}^{\infty} \frac{f^{(k)}(0)}{k!} \prod_{j=1}^{k} Z_{k,j}$$

has expectation $f(\lambda)$, since by independence $\mathbb{E}\big[\prod_{j=1}^{k} Z_{k,j}\big] = \lambda^k$. Of course it is not practical because the series is infinite, so the second idea is about making the series finite. As my new colleague Alex Thiery remarked, if you want to compute an infinite series

$$S = \sum_{k=0}^{\infty} s_k,$$

you can always draw an integer-valued random variable $K$ with $\mathbb{P}(K \geq k) > 0$ for all $k$, and compute

$$\hat{S} = \sum_{k=0}^{K} \frac{s_k}{\mathbb{P}(K \geq k)},$$

since

$$\mathbb{E}\big[\hat{S}\big] = \sum_{k=0}^{\infty} \frac{s_k}{\mathbb{P}(K \geq k)} \, \mathbb{P}(K \geq k) = \sum_{k=0}^{\infty} s_k = S.$$

Combining the Taylor expansion and the random truncation, you can get unbiased estimates of $f(\lambda)$ for any (analytic) function $f$, given the ability to sample independent copies of a random variable $Z$ with expectation $\lambda$. The mentioned paper deals with more specific functions and different truncation schemes, but essentially it’s the same deal. So is that it? Problem solved?
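Here is a minimal sketch of the combined estimator for $f = \exp$, using a geometric truncation variable so that $\mathbb{P}(K \geq k) = r^k$; the choice of $Z$, of $r$ and all names are illustrative:

```python
import math
import random

def unbiased_exp(draw_Z, r=0.8):
    """Unbiased estimator of exp(lambda), given draws of Z with E[Z] = lambda.
    Taylor: exp(l) = sum_k l^k / k!.  Random truncation: term k survives with
    probability P(K >= k) = r^k and is reweighted by 1 / r^k, which preserves
    the expectation of every term.  Term k uses k *fresh* copies of Z (the
    triangular array), so that E[Z_1 * ... * Z_k] = lambda^k."""
    est, k = 1.0, 0                           # k = 0: the 0th term of exp is 1
    while random.random() < r:                # continue to term k+1 w.p. r
        k += 1
        prod = math.prod(draw_Z() for _ in range(k))
        est += prod / (math.factorial(k) * r ** k)
    return est

random.seed(2)
draw_Z = lambda: random.gauss(0.5, 1.0)       # E[Z] = 0.5, so target exp(0.5)
```

The geometric truncation keeps the expected number of terms finite ($r/(1-r)$ beyond the first), while the $1/r^k$ reweighting restores unbiasedness.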

One big remaining issue with this scheme is that you do not control the sign of the resulting estimates: if you are estimating a positive quantity such as a likelihood, as in the i.i.d. case mentioned above, applying the expansion/truncation scheme does not guarantee that your estimates take almost-surely non-negative values. In the Russian Roulette paper it is not a big issue, but it could potentially be one. So let me conclude with this problem, which is still open to the best of my knowledge.
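To see the sign problem concretely, here is the same kind of expansion/truncation estimator of $\exp(\lambda)$ (geometric truncation, purely illustrative parameters), with a count of how often the estimates come out negative even though $\exp(\lambda) > 0$:

```python
import math
import random

def exp_estimate(lam, sigma, r):
    """Expansion/truncation estimator of exp(lam), with Z ~ N(lam, sigma^2).
    Unbiased, but a product of an odd number of negative copies of Z can
    drag the whole estimate below zero."""
    est, k = 1.0, 0
    while random.random() < r:
        k += 1
        prod = math.prod(random.gauss(lam, sigma) for _ in range(k))
        est += prod / (math.factorial(k) * r ** k)
    return est

random.seed(3)
ests = [exp_estimate(-1.0, 2.0, 0.8) for _ in range(5_000)]
neg_frac = sum(e < 0 for e in ests) / len(ests)   # fraction of negative draws
```

With a negative-mean, high-variance $Z$, a substantial fraction of the draws is negative, which is exactly the sign problem discussed above.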

Suppose you have a real-valued random variable $X$ with expectation $\lambda$, and a function $f$ such that $f(\lambda) > 0$. You know how to sample any finite number of independent copies of $X$. Can you construct an almost-surely **non-negative** random variable $Y$ such that $\mathbb{E}[Y] = f(\lambda)$?

Spoiler alert: the answer is no in general… more about this soon!

## Comments

Umberto Picchini said, on 1 July 2013 at 09:22: Thanks for the great post. Papers closely related to the first paragraph of your post are http://arxiv.org/abs/1304.5299 and http://icml.cc/2012/papers/782.pdf

Christian Robert also commented on the first one: http://xianblog.wordpress.com/2013/04/28/austerity-in-mcmc-land/

Pierre Jacob said, on 1 July 2013 at 10:23: Ah thanks a lot, somehow I had missed the austerity papers!

xi'an said, on 1 July 2013 at 16:28: Euh…, why the picture?!

Dan Simpson said, on 1 July 2013 at 17:51: It’s Taylor Swift!

xi'an said, on 1 July 2013 at 17:54: Who?!?!

Dan Simpson said, on 1 July 2013 at 18:27: She’s a very young country singer from America. Among other things, she has a song called Red which would be genuinely useful if you ever needed to teach the concept of “similes” to someone. As a song, it kinda skates by on charm.

Mark Girolami said, on 1 July 2013 at 18:19: I fear, xi’an, this may be a generational issue – I have no idea who Taylor Swift is – nor have I the interest to look her up on Google…. a picture of the be-wigged Colin Maclaurin would have been so much better!! – I really like Pierre’s suggestion about using the ideas of unbiased estimators and Russian Roulette from our paper to perform exact Bayesian / likelihood-based inference in the so-called “BIG DATA” regime – the good thing is that exactness would be preserved, which I don’t think is the case in the austerity paper…. would be interesting to see how it actually works in practice — might just take a look.

Dan Simpson said, on 1 July 2013 at 18:31: Hmmm… but could you bound the cost? There’d also be a problem with variance explosion if you tried to keep it too low, I would suspect. But it would maybe be very useful to “fix” the “bias” in aggressive lasso pre-selection schemes. It turns out you can work out a priori which covariates will be zero, but the “safe” scheme is too expensive, so people use “lossy” ones. Maybe that could be the “big data” version of “don’t truncate until the kth term”…..

Alex Thiery said, on 2 July 2013 at 03:58: Hi Dan, do you mind expanding on your comment:

“maybe be very useful to ‘fix’ the ‘bias’ in aggressive lasso pre-selection schemes”

I am intrigued…

Dan Simpson said, on 2 July 2013 at 11:44: It’s not much of a thing, really – I was just free-associating…

It turns out that there are a bunch of pre-selection routines for lasso-type estimators that are necessary in the situation where you have so many covariates that you could never compute the estimate in the standard way (that’s due to memory/linear algebra issues). The first provably “correct” one was called SAFE, but some more aggressive schemes that occasionally drop variables that would otherwise be selected have been proposed (one recently in Series B by a bunch of famous people). So I was thinking in a Bayesian context that you could probably do something similar – instead of truncating a series, you would be truncating a dataset. Instead of the first terms being ranked highest, it would be those with highest pre-selection weights. I don’t think the current ranking technology is good enough for this – as said below, you’d want a lot of control over the quality (some sort of epsilon-alpha bounds would be perfect), but I suspect that there’s some nice work that could be done on this.

(nb – bias was a silly word to use…. lasso is biased, that’s the whole point :p)

Pierre Jacob said, on 2 July 2013 at 07:45: I’d also be interested to know more, Dan!

xi’an, I admit having trouble sometimes finding suitable pictures for the posts.

Mark, if you try this out I would be interested to see the frequencies of negative values in the likelihood estimates as a function of the size “m” of the sub-sample.

Dan Simpson said, on 2 July 2013 at 11:15: So, we didn’t talk about it in the paper, but in some cases you can really really control this. For example, in the log-determinant case, there is some old work on sharp-ish epsilon-alpha bounds (http://www.cs.ucdavis.edu/~bai/publications/baifaheygolub96.pdf). So you could probably balance the expected number of negatives with the cost of the estimator and the variance inflation due to the extra negatives.

Robin Ryder said, on 5 July 2013 at 11:31: Best illustrative photo ever!

Radu Craiu said, on 13 December 2013 at 16:15: Very interesting post!