Former office mate Alex Thiery is still in Singapore and will start blogging here soon, so we’ll still have two continents covered. We’re still looking for contributors on the other ones!
In a recent post Nicolas discussed some limitations of pseudo-random number generation. On a related note, there’s a feature of random variables that I find close to mystical.
In an on-going work with Alex Thiery, we had at some point to precisely define the notion of randomized algorithms, and we essentially followed Keane and O’Brien (as it happens, an article appeared today on arXiv that might also be related, or not). The difficulty comes with the randomness. We can think of a deterministic algorithm as a good old function mapping an input space to an output space, but a randomized algorithm adds some randomness on top of a deterministic scheme (in an accept-reject step for instance, or in a random stopping criterion), so that given fixed inputs the output might still vary. One way to formalise this consists in defining the algorithm as a deterministic function of the inputs and of a source of randomness; that randomness is represented by a single random variable, e.g. following a uniform distribution.
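To make this concrete, here is a minimal Python sketch (the function names are mine, purely for illustration, not from our paper): a Bernoulli sampler written as a deterministic function of its input and of a single uniform draw.

```python
import random

def bernoulli(p, u):
    """A Bernoulli(p) 'algorithm' as a deterministic function of
    the input p and the source of randomness u in [0, 1)."""
    # Given the same (p, u), the output never varies: all the
    # randomness of the algorithm is carried by u.
    return 1 if u < p else 0

# The randomized algorithm is recovered by feeding in fresh uniform draws:
u = random.random()
outcome = bernoulli(0.3, u)
```

Calling `bernoulli(0.3, u)` twice with the same `u` gives the same answer; the output only varies through the uniform variable.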
The funny, mystical and disturbing thing is that a single uniform random variable is enough to represent an infinity of them. It sounds like an excerpt of the Vedas, doesn’t it? To see this, write a single uniform realization in binary representation. That is, for $U$ uniformly distributed on $[0,1]$, write

$U = \sum_{k=1}^{\infty} b_k 2^{-k}$

with $b_k \in \{0,1\}$. The binary representation is $0.b_1 b_2 b_3 \ldots$

Now it’s easy to see that these zeros and ones are distributed as independent Bernoulli variables with parameter $1/2$. Now we put these digits in particular positions, filling an infinite grid along its diagonals: $b_1$ goes in cell $(1,1)$, then $b_2$ in $(1,2)$ and $b_3$ in $(2,1)$, then $b_4$ in $(1,3)$, $b_5$ in $(2,2)$, $b_6$ in $(3,1)$, and so on.
If we take each column or each row of the grid above, they’re independent, and each one is again the binary representation of a uniform random variable – you could also consider diagonals or funkier patterns. You could say that the random variable contains an infinity of independent clones of itself.
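Here is a small Python illustration of the bit-splitting trick (my own toy code): the odd-indexed and even-indexed digits of one uniform realization are reassembled into two numbers in $[0,1)$, which are realizations of two independent uniform variables.

```python
def bits_of(u, n):
    """First n binary digits b_1, b_2, ... of u in [0, 1)."""
    bits = []
    for _ in range(n):
        u *= 2
        b = int(u)
        bits.append(b)
        u -= b
    return bits

def from_bits(bits):
    """Rebuild a number in [0, 1) from its binary digits."""
    return sum(b * 2 ** -(k + 1) for k, b in enumerate(bits))

u = 0.7137  # any realization of a uniform variable
bits = bits_of(u, 40)
u_odd = from_bits(bits[0::2])   # digits b_1, b_3, b_5, ...
u_even = from_bits(bits[1::2])  # digits b_2, b_4, b_6, ...
# u_odd and u_even are (truncated) binary expansions of two
# independent uniform variables hidden inside u
```

In practice the truncation to 40 digits matters little, since a double-precision float only carries about 53 significant bits anyway.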
This property actually sounds dangerous now, come to think of it. I think it was always well-known but people might not have made the link with Star Wars. In the end I’m happy to stick with harmless pseudo-random numbers, for safety reasons.
To illustrate generally complex probability density functions on continuous spaces, researchers always use the same examples, for instance mixtures of Gaussian distributions or a banana shaped distribution defined on $\mathbb{R}^2$, e.g. with density function proportional to

$\exp\left(-\frac{x_1^2}{200} - \frac{1}{2}\left(x_2 + B x_1^2 - 100B\right)^2\right)$

for some constant $B > 0$.
If we draw a sample from this distribution using MCMC we obtain a scatterplot like this one:
Clearly it doesn’t really look like a banana, even if you use yellow to colour the dots like here. Actually it looks more like a boomerang, if anything. I was worried about this for a while, until I came up with a more realistic banana shaped distribution:
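For what it’s worth, such a scatterplot can be produced with a few lines of random-walk Metropolis. Here is a minimal Python sketch (the blog’s code is usually R; the target below is the standard twisted-Gaussian “banana” of Haario et al., with constants picked by me for illustration):

```python
import math
import random

def log_banana(x1, x2, b=0.1):
    # twisted-Gaussian ("banana") log-density, up to an additive constant
    return -x1 ** 2 / 200.0 - 0.5 * (x2 + b * x1 ** 2 - 100.0 * b) ** 2

def rw_metropolis(logd, n_iters=5000, step=1.0, seed=42):
    rng = random.Random(seed)
    x = (0.0, 0.0)
    chain = []
    for _ in range(n_iters):
        # isotropic Gaussian random-walk proposal
        prop = (x[0] + step * rng.gauss(0, 1), x[1] + step * rng.gauss(0, 1))
        # accept with probability min(1, pi(prop) / pi(x))
        if math.log(rng.random()) < logd(*prop) - logd(*x):
            x = prop
        chain.append(x)
    return chain

chain = rw_metropolis(log_banana)
# scatter-plotting 'chain' gives the boomerang-like cloud discussed above
```

A hand-tuned step size is enough for this toy run; an adaptive sampler would pick it automatically.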
See how well defined the shape is compared to the first figure? And there’s even the little tail, which proves so convenient when we want to peel the fruit. More generally we might want to create target density functions based on arbitrary shapes. For this you can now try RShapeTarget, which you can install directly from R using devtools:
library(devtools)
install_github(repo = "RShapeTarget", username = "pierrejacob")
The package parses SVG files representing shapes, and creates target densities from them. More precisely, an SVG file contains “paths”, which are sequences of points (for instance the above banana is a single closed path). The associated log-density at any point $x$ is defined by $-\lambda \, d(x, \mathcal{P})$, where $\mathcal{P}$ is the closest path of the shape from $x$ and $d(x, \mathcal{P})$ is the distance between the point and the path. The parameter $\lambda$ specifies the rate at which the density decays as the point moves away from the shape. With this you can define the maple leaf distribution, as a tribute to JSM 2013:
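As a toy stand-in for what the package computes (the function names below are mine, not RShapeTarget’s), the distance-based log-density can be sketched in Python for a path stored as a closed polygon:

```python
import math

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to the segment [a, b]."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # project p onto the segment, clamping to its endpoints
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def log_shape_density(p, path, lam=1.0):
    """Unnormalized log-density: -lam times the distance from p
    to a path given as a closed sequence of points."""
    d = min(dist_to_segment(p, path[i], path[(i + 1) % len(path)])
            for i in range(len(path)))
    return -lam * d

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # a toy "shape"
on_path = log_shape_density((0.5, 0.0), square)   # on the path: log-density 0
off_path = log_shape_density((2.0, 0.5), square)  # decays with the distance
```

Points on the path get the maximal density, and the density decays exponentially in the distance at rate $\lambda$.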
In the package you can get a distribution from a SVG file using the following code:
library(RShapeTarget)
# create target from file
my_shape_target <- create_target_from_shape(my_svg_file_name, lambda = 1)
# test the log density function on 25 randomly generated points
my_shape_target$logd(matrix(rnorm(50), ncol = 2), my_shape_target$algo_parameters)
Since characters are just a bunch of paths, you can also define distributions based on words, for instance:
which is done as follows (warning: only a–z and A–Z are allowed for now – no numbers, no spaces, no punctuation):
library(RShapeTarget)
word_target <- create_target_from_word("Hodor")
For words, I defined the target density function as before, except that it is constant on the letters: if a point is outside a letter, its density is computed based on the distance to the nearest path; if it is inside a letter, the density is constant, so that the letters are “filled” with some constant density. I thought it’d look better.
Now I’m not worried about the banana shaped distribution any more, but rather by the fact that the only word I could think of was “Hodor” (with whom you can chat over there).
I’ll give a talk in a session organized by Scott Schmidler, entitled Adaptive Monte Carlo Methods for Bayesian Computation; you can find the session programme here [online program]. I’ll talk about score and observed information matrix estimation in state-space models.
According to the rumour and Christian’s reflections on the past years (2009, 2010, 2011), I should prepare my schedule in advance to really enjoy this giant meeting. So if you want to meet there, please send me an e-mail!
See you in Montréal!
We’re at the Big Data era blablabla, but the advanced computational methods usually don’t scale well enough to match the increasing sizes of datasets. For instance, even in the simple case of i.i.d. data $Y_1, \ldots, Y_n$ and an associated likelihood function $L(\theta) = \prod_{i=1}^{n} f(Y_i; \theta)$, the cost of evaluating the likelihood function at any parameter $\theta$ typically grows at least linearly with $n$. If you then plug that likelihood into an optimization technique to find the Maximum Likelihood Estimate, or into a sampling technique such as Metropolis–Hastings to sample from the posterior distribution, the computational cost grows accordingly for a fixed number of iterations. However, you can get unbiased estimates of the log-likelihood by drawing $m$ indices $i_1, \ldots, i_m$ uniformly in the index set $\{1, \ldots, n\}$ and by computing $\frac{n}{m} \sum_{j=1}^{m} \log f(Y_{i_j}; \theta)$. This way you sub-sample from the whole dataset, and you can choose $m$ according to your computational budget. However, is it possible to perform inference with these estimates instead of the complete log-likelihood?
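To see the unbiasedness at work, here is a quick Python sketch with a toy Gaussian model (names and constants are mine, for illustration only): averaging many subsampled estimates recovers the full log-likelihood.

```python
import math
import random

def loglik_full(theta, data):
    # full Gaussian log-likelihood (unit variance): cost linear in n
    return sum(-0.5 * (y - theta) ** 2 - 0.5 * math.log(2 * math.pi)
               for y in data)

def loglik_subsampled(theta, data, m, rng):
    # unbiased estimate: draw m indices uniformly with replacement,
    # then rescale the partial sum by n / m
    n = len(data)
    idx = [rng.randrange(n) for _ in range(m)]
    return (n / m) * sum(-0.5 * (data[i] - theta) ** 2
                         - 0.5 * math.log(2 * math.pi) for i in idx)

rng = random.Random(1)
data = [rng.gauss(0.5, 1.0) for _ in range(10000)]
full = loglik_full(0.5, data)
# each estimate costs m = 100 evaluations instead of n = 10000;
# averaging many of them recovers the full log-likelihood
estimates = [loglik_subsampled(0.5, data, m=100, rng=rng) for _ in range(2000)]
avg = sum(estimates) / len(estimates)
```

Each individual estimate is noisy, of course; the whole question is what that noise does once plugged into an optimizer or an MCMC sampler.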
This blog started as a collaborative blog written by then PhD students at CREST. Now some of us have left the lab but still feel like blogging from time to time, so we keep using this blog. Going further, I don’t see any reason not to broaden our perspective by letting other people participate, in order to maintain a decent activity. The target would be at least one post per week. I am sure that many junior researchers out there feel like they could write a post or two, so this could be the place to share your views!
If you’re interested, either for a one-time blog post or on a more regular basis, please feel free to contact us by e-mail or in the comments below. You can just browse through the blog if you’re not sure what the scope is… actually the scope is pretty ill-defined, but it includes tips and tricks in R and LaTeX, conferences in Statistics (mostly Bayesian or computational), random datasets, recent articles, reports on unusual uses of statistical methods…
And then I’ve looked at the genealogy of all of my French colleagues and they are all descendants of him, which is rather annoying.
Arnaud Doucet, Sylvain Rubenthaler and I have just posted a technical report on arXiv about estimating the first- and second-order derivatives of the log-likelihood (also called the score and the observed information matrix, respectively) in general (intractable) statistical models, and in particular in (non-linear, non-Gaussian) state-space models. We call these estimates “derivative-free” because they can be computed even if the user cannot compute any kind of derivatives related to the model (as opposed to e.g. this paper and this paper). Actually, in some cases of interest we cannot even evaluate the log-likelihood point-wise (we do not have a formula for it), so forget about explicit derivatives. Would you like to know more?
Just a quick note about BayesComp, a new wiki about Bayesian Computational Statistics (see this outdated but well-written introduction if you really don’t know what that is), as Xian pointed out. It is organised by the ISBA Section on Bayesian Computation, notably Peter Green and Nicolas Chopin so far. If the community gets into it, it could become the nerve centre for online resources about Bayesian Computation, which so far are quite scattered and poorly advertised.
Good luck to BayesComp!