Hi there!
Unfortunately this post is indeed about statistics…
If you are randomly walking around the statistics blogs, you have probably heard of this new language called Julia. Its developers say it is as easy to write as R and as fast as C (!), which is quite a catchy way of selling their work. After talking with an enthusiastic Julia user in Amsterdam, I decided to give it a try. And here I am, sharing my first impressions.
First things first, the installation is as easy as for any other language, plus there is a neat package manager that lets you get started quite easily. In this respect it is very similar to R.
On the minus side, I have become a big fan of RStudio, which Julian (… oupsy, Julyan) told you about a long time ago. These kinds of programs really make your life easier. I thus tried Juno, which turned out to be cumbersome and terribly slow. I would have loved an IDE for Julia that is up to the RStudio standard. Never mind.
Now let's talk a little about what is really interesting: is their catchphrase false advertising or not?!
There are a bunch of relatively good tutorials online which are really helpful for learning the basic vocabulary. And indeed, if like me you are used to coding in R and/or Python, you should get it pretty fast: you can almost copy-paste your favourite code into Julia and, with a few adjustments, it will work. So, as easy to write as R: quite so.
I then compared computational times on some of my latest code, and there came the good surprise! Code that took a handful of minutes to run in R, mainly due to unavoidable loops, took a couple of seconds in Julia, without any other sort of optimization. The handling of big objects is smooth, and I did not run into the memory problems that R suffered from.
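To give a flavour of this, here is a hypothetical toy example (not the actual code from my comparison): the kind of explicit loop that is painfully slow in R but runs at compiled speed in Julia, with no vectorization tricks needed.

```julia
# Toy illustration: a plain for-loop is idiomatic and fast in Julia.
function random_walk(n)
    x = zeros(n)
    for i in 2:n              # explicit loop, no need to vectorize
        x[i] = x[i-1] + randn()
    end
    return x
end

x = random_walk(10_000_000)   # ten million iterations run in well under a second
println(length(x))
```

In R, the equivalent loop over ten million iterations is exactly the kind of thing that takes minutes; in Julia it just works.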
So far so good! But of course there have to be some drawbacks. The first one is the poor package repository compared to CRAN, or even to what you can get for Python. This will likely improve over the next few years, as the language is still quite new, but it is bothersome to have to re-code something when you are used to simply loading a package in R. Another, probably less important, problem is the lack of data-visualization methods, and especially the absence of ggplot2, which we have grown quite fond of around here. There is of course Gadfly, which comes quite close, but once again it is so far very limited compared to what I was used to…
All in all, I am happy to have tried Julia, and I am quite sure I will be using it a lot from now on. However, even if it is great from an efficiency point of view, and way easier to learn than C (which I should have done a while ago), R and its tremendous package repository are far from beaten.
Oh, and by the way, it has PyPlot, based on Matplotlib, which lets you make some xkcd-like plots; these can make your presentations a lot more fun.
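As a minimal sketch (assuming the PyPlot.jl package is installed; the function names are Matplotlib's, re-exposed on the Julia side), this is roughly what it takes:

```julia
# Sketch: an xkcd-style plot from Julia via PyPlot (a Matplotlib wrapper).
using PyPlot

xkcd()                                  # switch Matplotlib to its hand-drawn style
x = range(0, stop = 2pi, length = 100)
plot(x, sin.(x))
title("A very serious result")
savefig("xkcd_sine.png")                # hypothetical output file name
```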
Hi there!
Like Pierre a while ago, I got fed up with printing articles, annotating them, losing them, re-printing them, and so on. Moreover, I also wanted to be able to carry more than one or two books in my bag without ruining my back. E-Ink readers seemed good but at some point I changed my mind…
After the ISBA conference in Kyoto, where I saw bazillions of iPads, I thought that tablets were really worth a shot. I am fine with reading on an LCD screen, I probably won't read scientific articles/books outside in the sun, and I like the idea of a light device that can replace my laptop at conferences. Furthermore, there is now a large choice of apps for annotating pdfs, which is crucial for me.
The device I chose runs on Android (mainly because there is no memory extension on Apple devices), combined with a good capacitive pen and an annotation app such as eZreader, which gets your pdfs directly from Dropbox (which is simply awesome). You can even use LaTeX (without fancy packages…), which may come in handy.
I hope I will not experience the same disappointment as Pierre did with his reader, but for the moment a tablet seems to be just what I needed!
Hi folks!
Last Tuesday a seminar on Bayesian procedures for inverse problems took place at CREST. We had time for two presentations by young researchers, Bartek Knapik and Kolyan Ray. Both presentations deal with the problem of observing a noisy version of a linear transform of the parameter of interest,

$Y = K\mu + \frac{1}{\sqrt{n}}\,Z,$

where $K$ is a linear operator and $Z$ a Gaussian white noise. Both presentations considered asymptotic properties of the posterior distribution (their papers can be found on arXiv, here for Bartek's and here for Kolyan's). There is a wide literature on asymptotic properties of the posterior distribution in direct models. When looking at the concentration of the posterior toward the true parameter $\mu_0$ given the data, with respect to some distance $d$, a well-known problem is to derive concentration rates, that is, the rate $\epsilon_n$ such that

$\Pi\left( d(\mu, \mu_0) \le M \epsilon_n \mid Y \right) \to 1$

in probability, for some constant $M > 0$.
For inverse problems, the usual methods, as introduced by Ghosal, Ghosh and van der Vaart (2000), typically fail, and results in this setting are thus in general difficult to obtain.
Bartek presented some very refined results in the conjugate case. He managed to get results on the concentration rates of the posterior distribution, on Bayesian credible sets, and on Bernstein–von Mises theorems – which state that the posterior is asymptotically Gaussian – when estimating a linear functional of the parameter of interest. Kolyan gave some general conditions on the prior that achieve a given concentration rate, and proved that these techniques lead to optimal concentration rates for classical models.
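Schematically, and in my own notation rather than the exact statement of either paper, a Bernstein–von Mises theorem for a linear functional $L\mu$ of the parameter takes the form:

```latex
% Schematic statement (notation mine, not that of the papers):
% the posterior for the linear functional $L\mu$, centred at an
% estimator $\widehat{L\mu}$ and rescaled by $\sqrt{n}$, is
% asymptotically Gaussian.
\[
  \Pi\!\left( \sqrt{n}\,\bigl(L\mu - \widehat{L\mu}\bigr) \in \cdot \,\middle|\, Y \right)
  \;\rightsquigarrow\; \mathcal{N}\!\left(0, \sigma_L^2\right)
  \quad \text{in probability.}
\]
```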
I knew only a little about inverse problems, but both talks were very accessible, and I will surely get more involved in this field!