Statisfaction

Moustache target distribution and Wes Anderson

Posted in Art, Geek, R by Pierre Jacob on 31 March 2014

Today I am going to introduce the moustache target distribution (moustarget distribution for brevity). Load some packages first.

library(wesanderson) # on CRAN
library(RShapeTarget) # available on https://github.com/pierrejacob/RShapeTarget/
library(PAWL) # on CRAN

Let’s invoke the moustarget distribution.

# build a target distribution from the moustache SVG shipped with RShapeTarget
shape <- create_target_from_shape(
  file_name = system.file(package = "RShapeTarget", "extdata/moustache.svg"),
  lambda = 5)
# initial distribution: standard bivariate normal
rinit <- function(size) matrix(rnorm(2 * size), ncol = 2)
moustarget <- target(name = "moustache", dimension = 2,
                     rinit = rinit, logdensity = shape$logd,
                     parameters = shape$algo_parameters)

This defines a target distribution represented by an SVG file, using RShapeTarget. The target probability density function is defined on \mathbb{R}^2: it is proportional to 1 on the segments described in the SVG file, and decreases exponentially fast to 0 away from them. The density function of the moustarget is plotted below, a picture being worth a thousand words.
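As a quick sanity check (a minimal sketch on my part, reusing the logd signature that RShapeTarget exposes, as in the examples further down), we can evaluate the log density at a few points drawn from the initial distribution:

# evaluate the moustarget log density at 25 points from the initial distribution
points <- rinit(25)
shape$logd(points, shape$algo_parameters)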


Beautiful Science: Picturing Data, Inspiring Insight

Posted in General by Pierre Jacob on 19 March 2014

Martin Krzywinski, Circles of Life, 2013 (© Martin Krzywinski)

Hey,

There’s a nice exhibition open until May 26th at the British Library in London, entitled Beautiful Science: Picturing Data, Inspiring Insight. Various examples of data visualizations are shown, either historical or very modern, or even made especially for the exhibition. Definitely worth a detour if you happen to be in the area; you can see everything in 15 minutes.

In particular there are nice visualisations of historical climate data, gathered from the logbooks of the English East India Company, whose ships were crossing every possible sea at the beginning of the 19th century. The logbooks contain locations and daily weather reports, handwritten by the captains themselves. It turns out the logbooks are kept at the British Library itself, and some of them are on display at the exhibition. More info on that project here: oldweather.org.

Rasmus Bååth’s Bayesian first aid

Posted in Project, R, Statistics by Pierre Jacob on 23 January 2014

Besides having coded a pretty cool MCMC app in Javascript, this guy Rasmus Bååth has started the Bayesian first aid project. The idea is that if there’s an R function called blabla.test performing test “blabla”, there should be a function bayes.blabla.test performing a similar test in a Bayesian framework, and showing the output in a similar way so that the user can easily compare both approaches. This post explains it all. JAGS and BEST seem to be the two main workhorses under the hood.
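For instance, a minimal sketch (assuming the package is installed from Rasmus’s GitHub repository, rasmusab/bayesian_first_aid):

# devtools::install_github("rasmusab/bayesian_first_aid")
library(BayesianFirstAid)
x <- rnorm(30, mean = 0.5) # some synthetic data
t.test(x)       # classical one-sample t-test
bayes.t.test(x) # its Bayesian counterpart, reported in a comparable format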

Kudos to Rasmus for this very practical approach, potentially very impactful. Maybe someday people will have to specify if they want a frequentist approach, and not the other way around! (I had a dream, etc.)

MCM’Ski lessons

Posted in Seminar/Conference by Pierre Jacob on 16 January 2014

A few days after the MCMSki conference, I am starting to see the main lessons I gathered there.

  1. I should really read the full program before attending the next MCMSki. The three parallel sessions looked consistently interesting, and I really regret having missed some talks (in particular Dawn Woodard‘s and Natesh Pillai‘s) and some posters as well (admittedly, due to exhaustion on my part).
  2. Compared to the previous edition three years ago (in Utah), the main themes have changed significantly. Scalability, approximate methods, non-asymptotic results, 1/n methods … these keywords are now on everyone’s lips. Can’t wait to see if MCQMC’14 will feel that different from MCQMC’12.
  3. The community is rightfully concerned about scaling Monte Carlo methods to big data, with some people pointing out that models should also be rethought in this new context.
  4. The place of software developers in the conference, or simply references to software packages in the talks, is much greater than it used to be. It’s a very good sign towards reproducible research in our field. There’s still a lot of work to do, in particular in terms of making parallel computing easier to access (time to advertise LibBi a little bit). On a related note, many people now point out whether their proposed algorithms are parallel-friendly or not.
  5. Going from the Rockies to the Alps, the food drastically changed from cheeseburgers to just melted cheese. Bread could be found but ground beef and Budweiser were reported missing.
  6. It’s fun to have an international conference in your home country, but switching from French to English all the time was confusing.

Back in flooded Oxford now!

Dennis Lindley (1923-2013)

Posted in General by Pierre Jacob on 16 December 2013

Pierre Jacob:

I’ve just heard this sad piece of news. Definitely one of the greatest statisticians of the last 50 years. I wish I had met him in person.

Originally posted on Xi'an's Og:

Dennis Lindley most sadly passed away yesterday at the hospital near his home in Somerset. He was one of the founding fathers of our field (of Bayesian statistics), who contributed to formalise Bayesian statistics in a coherent theory. And to make it one with rational decision-making, a perspective missing in Jeffreys’ vision. (His papers figured prominently in the tutorials we gave yesterday for the opening of O’Bayes 250.) At the age of 90, his interest in the topic had not waned away: as his interview with Tony O’Hagan last Spring showed, his passionate arguing for the rationale of the Bayesian approach was still there and alive! The review he wrote of The Black Swan a few years ago also demonstrated he had preserved his ability to see through bogus arguments. (See his scathing “One hardly advances the respect with which statisticians are held in society by making…


Singapore → Oxford

Posted in General by Pierre Jacob on 3 October 2013


A quick post to say that I’m moving from Singapore to Oxford, UK. I will dearly miss my Singaporean colleagues, as well as my morning Laksa and Nasi lemak. I look forward to the skiing season though.

I will work for the next two years as a post-doc with Professors Arnaud Doucet and Yee Whye Teh, on sequential Monte Carlo methods for high-dimensional problems.

Former office mate Alex Thiery is still in Singapore and will start blogging here soon, so we’ll still have two continents covered. We’re still looking for contributors on the other ones!

Clone wars inside the uniform random variable

Posted in General by Pierre Jacob on 25 September 2013

Hello,

In a recent post Nicolas discussed some limitations of pseudo-random number generation. On a related note, there’s a feature of random variables that I find close to mystical.

In an on-going work with Alex Thiery, we had to precisely define the notion of randomized algorithms at some point, and we essentially followed Keane and O’Brien [1994] (as it happens, an article appeared on arXiv today that might be related, or not). The difficulty comes with the randomness. We can think of a deterministic algorithm as a good old function mapping an input space to an output space, but a random algorithm adds some randomness over a deterministic scheme (in an accept-reject step for instance, or a random stopping criterion), so that given fixed inputs the output might still vary. One way to formalise this consists in defining the algorithm as a deterministic function of the inputs and of a source of randomness; that randomness is represented by a single random variable U, e.g. following a uniform distribution.
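To make this concrete, here is a toy sketch of my own (not from the paper): a Bernoulli sampler written as a deterministic function of its input p and of the random source u.

# a randomized algorithm as a deterministic function of (input, randomness):
# given the same p and the same u, the output is always the same
coin_flip <- function(p, u) as.integer(u < p)
coin_flip(0.3, runif(1)) # a Bernoulli(0.3) draw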

The funny, mystical and disturbing thing is that a single uniform random variable is enough to represent an infinity of them. It sounds like an excerpt of the Vedas, doesn’t it? To see this, write a single uniform realization in binary representation. That is, for U \in [0,1] write

U = \sum_{k> 0} b_k 2^{-k}

with b_k = \lfloor 2^k U \rfloor \bmod 2. The binary representation of U is then 0.b_1b_2b_3b_4b_5\ldots

Realization of a uniform random variable in binary representation

It’s easy to see that these zeros and ones are distributed as independent Bernoulli variables. Now we arrange these digits in a particular pattern, as follows.

Same zeros and ones ordered in a triangle of increasing size

If we take each column or each row from the grid above, they’re independent and they’re also binary representations of uniform random variables – you could also consider diagonals or more funky patterns. You could say that the random variable contains an infinity of independent clones.
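Here is a small illustration in R (with the caveat that a double only carries about 53 reliable binary digits, so this illustrates the idea rather than providing a practical way of generating independent uniforms):

# split one uniform draw into two by de-interleaving its binary digits
u <- runif(1)
nbits <- 50 # doubles only give us ~53 reliable bits
bits <- floor(2^(1:nbits) * u) %% 2 # b_1, b_2, ..., b_nbits
u_odd <- sum(bits[seq(1, nbits, by = 2)] * 2^-(1:(nbits / 2)))
u_even <- sum(bits[seq(2, nbits, by = 2)] * 2^-(1:(nbits / 2)))
c(u, u_odd, u_even) # three (approximately) uniform numbers in [0, 1]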

This property actually sounds dangerous now, come to think of it. I think it was always well-known but people might not have made the link with Star Wars. In the end I’m happy to stick with harmless pseudo-random numbers, for safety reasons.

From SVG to probability distributions [with R package]

Posted in R, Statistics by Pierre Jacob on 25 August 2013

Hey,

To illustrate complex probability density functions on continuous spaces, researchers always use the same examples, for instance mixtures of Gaussian distributions or a banana shaped distribution defined on \mathbb{R}^2 with density function:

f(x,y) = \exp\left(-\frac{x^2}{200} - \frac{1}{2}(y+Bx^2-100B)^2\right)
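As an aside, this density can be sampled exactly, no MCMC needed: marginally x \sim \mathcal{N}(0, 100), and conditionally y \mid x \sim \mathcal{N}(100B - Bx^2, 1). A quick sketch, with B = 0.03 as an assumed value for the bend parameter:

# direct sampling from the banana density via its factorization
B <- 0.03 # assumed value, controls how bent the banana is
n <- 10000
x <- rnorm(n, mean = 0, sd = 10) # x ~ N(0, 100)
y <- rnorm(n) + 100 * B - B * x^2 # y | x ~ N(100B - Bx^2, 1)
plot(x, y, pch = 20, col = "gold")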

If we draw a sample from this distribution using MCMC, we obtain a scatterplot like this one:

Fig. 1: a sample from the very lame banana shaped distribution

Clearly it doesn’t really look like a banana, even if you use yellow to colour the dots like here. Actually it looks more like a boomerang, if anything. I was worried about this for a while, until I came up with a more realistic banana shaped distribution:

Fig. 2: a sample from the realistic banana shaped distribution

See how the shape is well defined compared to the first figure? And there’s even the little tail, which proves so convenient when we want to peel the fruit. More generally, we might want to create target density functions based on general shapes. For this you can now try RShapeTarget, which you can install directly from R using devtools:

library(devtools)
install_github(repo = "RShapeTarget", username = "pierrejacob")

The package parses SVG files representing shapes, and creates target densities from them. More precisely, an SVG file contains “paths”, which are sequences of points (for instance the above banana is a single closed path). The associated log density at any point x is defined by -1/(2\lambda) \times d(x, P), where P is the path of the shape closest to x and d(x,P) is the distance between the point and the path. The parameter \lambda specifies the rate at which the density decays as the point moves away from the shape. With this you can define the maple leaf distribution, as a tribute to JSM 2013:

Fig. 3: a sample from the “O Canada” probability distribution.

In the package you can get a distribution from a SVG file using the following code:

library(RShapeTarget)
# path to your SVG file (placeholder)
my_svg_file_name <- "path/to/my_shape.svg"
# create target from file
my_shape_target <- create_target_from_shape(my_svg_file_name, lambda = 1)
# test the log density function on 25 randomly generated points
my_shape_target$logd(matrix(rnorm(50), ncol = 2), my_shape_target$algo_parameters)
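To get a picture like the ones above, one simple approach (my own sketch, not the package’s plotting code; the grid bounds are guesses and depend on the shape’s coordinates) is to evaluate the density on a grid:

# evaluate the density on a grid and plot it with image()
xs <- seq(-5, 5, length.out = 200) # assumed bounds, adjust to your shape
ys <- seq(-5, 5, length.out = 200)
grid <- as.matrix(expand.grid(xs, ys))
logd <- my_shape_target$logd(grid, my_shape_target$algo_parameters)
image(xs, ys, matrix(exp(logd), nrow = length(xs)))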

Since characters are just a bunch of paths, you can also define distributions based on words, for instance:

Fig. 5: Hodor.

which is done as follows (warning: only a-z and A-Z are allowed for now, no numbers, no spaces, no punctuation):

library(RShapeTarget)
word_target <- create_target_from_word("Hodor")

For words, I defined the target density function as before, except that it’s constant on the letters: if a point is outside a letter, its density is computed based on the distance to the nearest path; if it’s inside a letter, it’s just constant, so that the letters are “filled” with some constant density. I thought it’d look better.
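In pseudo-R, the logic looks like this (a sketch only: point_in_any_path and distance_to_nearest_path are hypothetical helpers standing in for the package internals):

# hypothetical sketch of the "filled letters" log density
logd_word <- function(x, paths, lambda) {
  inside <- point_in_any_path(x, paths) # hypothetical helper
  d <- distance_to_nearest_path(x, paths) # hypothetical helper
  ifelse(inside, 0, -d / (2 * lambda)) # constant inside, decaying outside
}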

Now I’m not worried about the banana shaped distribution any more, but rather by the fact that the only word I could think of was “Hodor” (with whom you can chat over there).

Joint Statistical Meeting 2013

Posted in General, Seminar/Conference, Statistics by Pierre Jacob on 23 July 2013

A typical statistical meeting.

Hey,

In a few weeks (August 3-8) I’ll attend the Joint Statistical Meeting in Montréal, Canada. According to Wikipedia it’s been held every year since 1840 and now gathers more than 5,000 participants!

I’ll talk in a session organized by Scott Schmidler, entitled Adaptive Monte Carlo Methods for Bayesian Computation; you can find the session programme in the online program. I’ll talk about score and observed information matrix estimation in state-space models.

According to rumour, and Christian’s reflections on past years (2009, 2010, 2011), I should prepare my schedule in advance to really enjoy this giant meeting. So if you want to meet there, please send me an e-mail!

See you in Montréal!

Path storage in the particle filter

Posted in Statistics by Pierre Jacob on 12 July 2013

Typical ancestry tree generated by a particle filter

Hey particle lovers,

With Lawrence Murray and Sylvain Rubenthaler, we looked at how to store the paths in the particle filter, and at the related expected memory cost. We just arXived a technical report about it. Would you like to know more?

