Bayesian workshop in Grenoble, September 6-7

Posted in General, Seminar/Conference by Julyan Arbel on 23 May 2018


We are organising a two-day Bayesian workshop in Grenoble on September 6-7, 2018. It will be the second edition of the Italian-French statistics seminar (link to first edition), titled this year: Bayesian learning theory for complex data modeling. The workshop will give young statisticians the opportunity to learn from and interact with highly qualified senior researchers in probability, theoretical and applied statistics, with a particular focus on Bayesian methods.

Anyone interested in this field is welcome. There will be two junior sessions and a poster session, with a call for abstracts open until June 30. A particular focus will be given to researchers at an early stage of their career, or currently studying for a PhD, MSc or BSc. The junior sessions are supported by ISBA through travel awards.

There will be a social dinner on September 6, and a hike in the mountains organised on September 8.

Confirmed invited speakers

• Simon Barthelmé, Gipsa-lab, Grenoble, France
• Arnoldo Frigessi, University of Oslo, Norway
• Benjamin Guedj, Inria Lille – Nord Europe, France
• Alessandra Guglielmi, Politecnico di Milano, Italy
• Antonio Lijoi, University Bocconi, Milan, Italy
• Bernardo Nipoti, Trinity College Dublin, Ireland
• Sonia Petrone, University Bocconi, Milan, Italy

Important Dates:

• June 30, 2018: Abstract submission closes
• July 20, 2018: Notification on abstract acceptance
• August 25, 2018: Registration closes

More details and how to register:

We look forward to seeing you in Grenoble.




AI in Grenoble, 2nd to 6th July 2018

Posted in General, Seminar/Conference by Julyan Arbel on 22 March 2018


This is an advertisement for a conference on AI organised at Inria Grenoble by the Thoth team and Naver Labs: this AI summer school comprises lectures and practical sessions conducted by renowned experts in different areas of artificial intelligence.

This event is the revival of a past series of very successful summer schools which took place in Grenoble and Paris. The latest edition of this series was held in 2013. While originally focusing on computer vision, the summer school now targets a broader AI audience, and will also include presentations about machine learning, natural language processing, robotics, and cognitive science.

Note that NAVER LABS is funding a number of students to attend PAISS. Apply before 4th April.


Sub-Gaussian property for the Beta distribution (part 3, final)

Posted in General, R by Julyan Arbel on 26 December 2017

When a Beta random variable wants to act like a Bernoulli: convergence of optimal proxy variance.

In this third and last post about the sub-Gaussian property for the Beta distribution [1] (post 1 and post 2), I would like to show the interplay with the Bernoulli distribution, as well as some connections with optimal transport (OT is a hot topic in general, and also on this blog with Pierre's posts on Wasserstein ABC).

Sub-Gaussian property for the Beta distribution (part 2)

Posted in R by Julyan Arbel on 20 December 2017


Left: What makes the Beta optimal proxy variance (red) so special? Right: The difference function has a double zero (black dot).

As a follow-up on my previous post on the sub-Gaussian property for the Beta distribution [1], I’ll give here a visual illustration of the proof.

A random variable X with finite mean \mu=\mathbb{E}[X] is sub-Gaussian if there is a positive number \sigma such that:

\mathbb{E}[\exp(\lambda (X-\mu))]\le\exp\left(\frac{\lambda^2\sigma^2}{2}\right)\,\,\text{for all } \lambda\in\mathbb{R}.

We focus on X being a Beta(\alpha,\beta) random variable. Its moment generating function \mathbb{E}[\exp(\lambda X)] is known as the Kummer function, or confluent hypergeometric function _1F_1(\alpha,\alpha+\beta,\lambda). So X is \sigma^2-sub-Gaussian as soon as the difference function

u_\sigma(\lambda)=\exp\left(\frac{\lambda^2\sigma^2}{2}\right)-\mathbb{E}[\exp(\lambda (X-\mu))]
remains positive on \mathbb{R}. This difference function u_\sigma(\cdot) is plotted on the right panel above for parameters (\alpha,\beta)=(1,1.3). In the plot, \sigma^2 varies from green for the variance \text{Var}[X]=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} (which is a lower bound on the optimal proxy variance) to blue for the value \frac{1}{4(\alpha+\beta+1)}, a simple upper bound given by Elder (2016), [2]. The idea of the proof is simple: the optimal proxy variance corresponds to the value of \sigma^2 for which u_\sigma(\cdot) admits a double zero, as illustrated by the red curve (black dot). The left panel shows the curves with \mu = \frac{\alpha}{\alpha+\beta} varying, interpolating from green for \text{Var}[X]=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} to blue for \frac{1}{4(\alpha+\beta+1)}, with only one curve, in red, qualifying as the optimal proxy variance.
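This can be checked numerically. Below is a small sketch of mine (not the paper's code): the comparison is done on the log scale to avoid overflow, using scipy's implementation of the Kummer function, with the same parameters (\alpha,\beta)=(1,1.3) as in the plot. The variance fails to be a proxy variance (the difference dips below zero for some \lambda), while the upper bound succeeds.

```python
import numpy as np
from scipy.special import hyp1f1

a, b = 1.0, 1.3                               # Beta(alpha, beta), as in the plot
mu = a / (a + b)
var = a * b / ((a + b)**2 * (a + b + 1))      # variance: lower bound on proxy variance
upper = 1 / (4 * (a + b + 1))                 # Elder's upper bound

def log_diff(lam, sigma2):
    """Log-scale analogue of the difference function u_sigma:
    lam^2*sigma2/2 - log E[exp(lam*(X - mu))]; it has the same sign as u_sigma."""
    return lam**2 * sigma2 / 2 + lam * mu - np.log(hyp1f1(a, a + b, lam))

lam = np.linspace(-20, 20, 4001)
min_var = log_diff(lam, var).min()       # negative: Var[X] is not a proxy variance
min_upper = log_diff(lam, upper).min()   # nonnegative: upper is a valid proxy variance
```

The gap between the two values of \sigma^2 is tiny here (about 0.0745 versus 0.0758), which is what makes the optimal proxy variance delicate to pin down.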


[1] Marchal and Arbel (2017), On the sub-Gaussianity of the Beta and Dirichlet distributions. Electronic Communications in Probability, 22:1–14. Code on GitHub.
[2] Elder (2016), Bayesian Adaptive Data Analysis Guarantees from Subgaussianity.


Dynamic publication list for research webpage using arXiv, HAL, or bibtex2html

Posted in General by Julyan Arbel on 23 October 2017

Well, of course, dynamic is conditional upon some manual feeding. If you put your papers on arXiv or HAL, both offer dynamic widgets. If you maintain a .bib file of your papers, you can use tools like bibtex2html. This is not dynamic at all, but it allows finer tuning of the URL links you might want to add than the arXiv or HAL options do. I review these three options below.

ISBA elections, let’s go voting

Posted in General by Julyan Arbel on 16 October 2017

So it seems even Thomas B. went voting.

The International Society for Bayesian Analysis (ISBA) is running elections until November 15. This year, two contributors to this blog, Nicolas Chopin and myself, are running for ISBA Section offices. The sections of the society, nine in number as of today, gather researchers with common research interests: Computation, Objective Bayes, Nonparametrics, etc.

Here are our candidate statements:



New R user community in Grenoble, France

Posted in R, Seminar/Conference by Julyan Arbel on 13 September 2017


Nine R user communities already exist in France, and there is a much larger number of R communities around the world. It was time for Grenoble to start its own!

The goal of the R user group is to facilitate the identification of local useRs, to initiate contacts, and to organise experience and knowledge sharing sessions. The group is open to any local useR interested in learning and sharing knowledge about R.

The group’s website features a map and a table with members of the R group. Members with specific skills related to the use of R are referenced in the table and can be contacted by other members. A Gitter channel allows members to discuss R issues, and a calendar presents the upcoming events.


School of Statistics for Astrophysics, Autrans, France, October 9-13

Posted in General by Julyan Arbel on 7 September 2017

Didier Fraix-Burnet (IPAG), Stéphane Girard (Inria) and myself are organising a School of Statistics for Astrophysics, Stat4Astro, to be held in October in France. The primary goal of the school is to train astronomers in the use of modern statistical techniques. It also aims at bridging the gap between the two communities by emphasising practice during joint work sessions, so as to give firm grounds to the theoretical lessons and to initiate work on problems brought by the participants. There have been two previous sessions of this school, one on regression and one on clustering. The speakers of this edition, including Christian Robert, Roberto Trotta and David van Dyk, will focus on Bayesian methodology, with the moral support of the Bayesian society, ISBA. The interest of this statistical approach in astrophysics probably stems from its necessity and its success in determining the cosmological parameters from observations, especially from the cosmic background fluctuations. The cosmological community has thus been very active in this field (see for instance the Cosmostatistics Initiative COIN).

But Bayesian methodology, complementary to the more classical frequentist one, has many applications in physics in general, thanks to its ability to incorporate a priori knowledge into the inference computation, such as the uncertainties brought by the observational processes.

As with many sophisticated statistical techniques, astronomers are generally not familiar with Bayesian methodology, even though it is becoming more and more widespread and useful in the literature. This school will provide participants with both a strong theoretical background and a solid practice of Bayesian inference:

  • Introduction to R and Bayesian Statistics (Didier Fraix-Burnet, Institut de Planétologie et d’Astrophysique de Grenoble)
  • Foundations of Bayesian Inference (David van Dyk, Imperial College London)
  • Markov chain Monte Carlo (David van Dyk, Imperial College London)
  • Model Building (David van Dyk, Imperial College London)
  • Nested Sampling, Model Selection, and Bayesian Hierarchical Models (Roberto Trotta, Imperial College London)
  • Approximate Bayesian Computation (Christian Robert, Univ. Paris-Dauphine, Univ. Warwick and Xi’an (!))
  • Bayesian Nonparametric Approaches to Clustering (Julyan Arbel, Université Grenoble Alpes and Inria)

Feel free to register, we are not fully booked yet!


Sub-Gaussian property for the Beta distribution (part 1)

Posted in General by Julyan Arbel on 2 May 2017


With my friend Olivier Marchal (mathematician, not filmmaker, nor the cop), we have just arXived a note on the sub-Gaussianity of the Beta and Dirichlet distributions.

The notion, introduced by Jean-Pierre Kahane, is as follows:

A random variable X with finite mean \mu=\mathbb{E}[X] is sub-Gaussian if there is a positive number \sigma such that:

\mathbb{E}[\exp(\lambda (X-\mu))]\le\exp\left(\frac{\lambda^2\sigma^2}{2}\right)\,\,\text{for all } \lambda\in\mathbb{R}.

Such a constant \sigma^2 is called a proxy variance, and we say that X is \sigma^2-sub-Gaussian. If X is sub-Gaussian, one is usually interested in the optimal proxy variance:

 \sigma_{\text{opt}}^2(X)=\min\{\sigma^2\geq 0\text{ such that } X \text{ is } \sigma^2\text{-sub-Gaussian}\}.

Note that the variance always gives a lower bound on the optimal proxy variance: \text{Var}[X]\leq \sigma_{\text{opt}}^2(X). In particular, when \sigma_{\text{opt}}^2(X)=\text{Var}[X], X is said to be strictly sub-Gaussian.
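For a toy illustration of these definitions (my own sketch, not from the note): take X Rademacher, i.e. \pm1 with probability 1/2 each, so \mu=0, \text{Var}[X]=1 and \mathbb{E}[\exp(\lambda X)]=\cosh\lambda. Bisecting on \sigma^2 over a grid of \lambda values recovers \sigma_{\text{opt}}^2\approx 1=\text{Var}[X], i.e. a strictly sub-Gaussian variable.

```python
import numpy as np

lam = np.linspace(1e-3, 10, 2000)
log_mgf = np.log(np.cosh(lam))       # Rademacher: E[exp(lam X)] = cosh(lam), mean 0

def is_proxy(sigma2):
    """Check the sub-Gaussian inequality lam^2*sigma2/2 >= log MGF on the grid."""
    return bool(np.all(lam**2 * sigma2 / 2 >= log_mgf))

# bisection for the smallest sigma2 satisfying the inequality
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if is_proxy(mid) else (mid, hi)

# hi is now approximately 1 = Var[X]: strict sub-Gaussianity
```

The binding constraint sits at \lambda\to0, where the inequality degenerates to \sigma^2\geq\text{Var}[X]; this is exactly why the variance is always a lower bound on the optimal proxy variance.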

The sub-Gaussian property is closely related to the tails of the distribution. Intuitively, being sub-Gaussian amounts to having tails lighter than a Gaussian. This is actually a characterization of the property. Let Z\sim\mathcal{N}(0,1). Then:

X \text{ is sub-Gaussian } \iff \exists c, \forall x\geq0:\, \mathsf{P}(|X-\mathbb{E}[X]|\geq x) \leq c\mathsf{P}(|Z|\geq x).

That equivalence clearly implies exponential upper bounds for the tails of the distribution since a Gaussian Z\sim\mathcal{N}(0,\sigma^2) satisfies

\mathsf{P}(Z\ge x)\le\exp(-\frac{x^2}{2\sigma^2}).

That can also be seen directly: for a \sigma^2-sub-Gaussian variable X,

\forall\, \lambda>0\,:\,\,\mathsf{P}(X-\mu\geq x) = \mathsf{P}(e^{\lambda(X-\mu)}\geq e^{\lambda x})\leq \frac{\mathbb{E}[e^{\lambda(X-\mu)}]}{e^{\lambda x}}\quad\text{by Markov's inequality,}

\leq\exp(\frac{\sigma^2\lambda^2}{2}-\lambda x)\quad\text{by sub-Gaussianity.}

The quadratic function \lambda\mapsto \frac{\sigma^2\lambda^2}{2}-\lambda x is minimized on \mathbb{R}_+ at \lambda = \frac{x}{\sigma^2}, for which we obtain

\mathsf{P}(X-\mu\geq x) \leq\exp(-\frac{x^2}{2\sigma^2}).
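As a quick sanity check of this bound (a numerical aside of mine): for a standard Gaussian the exact tail is available through the complementary error function, and the variance itself serves as a proxy variance, so the derived bound must dominate the exact tail everywhere.

```python
from math import erfc, exp, sqrt

sigma2 = 1.0                # for X ~ N(0, sigma2), sigma2 is a proxy variance
for x in [0.5, 1.0, 2.0, 3.0]:
    exact = 0.5 * erfc(x / sqrt(2 * sigma2))   # P(X >= x), exactly
    chernoff = exp(-x**2 / (2 * sigma2))       # the bound derived above
    assert exact <= chernoff
```

The bound is loose by roughly a factor 1/(x\sqrt{2\pi\sigma^2}) for large x, but it captures the correct Gaussian decay rate e^{-x^2/(2\sigma^2)}.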

In that sense, the sub-Gaussian property of any compactly supported random variable X comes for free since in that case the tails are obviously lighter than those of a Gaussian. A simple general proxy variance is given by Hoeffding’s lemma. Let X be supported on [a,b] with \mathbb{E}[X]=0. Then for any \lambda\in\mathbb{R},

\mathbb{E}[\exp(\lambda X)]\leq\exp\left(\frac{(b-a)^2}{8}\lambda^2\right)

so X is \frac{(b-a)^2}{4}-sub-Gaussian.
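For instance (a sketch of mine, with Beta(2,3) as an arbitrary choice on [a,b]=[0,1]): the Beta moment generating function is the Kummer function _1F_1(\alpha,\alpha+\beta,\lambda), computed here with scipy, so Hoeffding's bound can be checked directly on a grid of \lambda values.

```python
import numpy as np
from scipy.special import hyp1f1

alpha, beta = 2.0, 3.0                # Beta(2,3), supported on [a,b] = [0,1]
mu = alpha / (alpha + beta)
lam = np.linspace(-30, 30, 601)
# centered MGF E[exp(lam*(X - mu))], via the Kummer function
centered_mgf = np.exp(-lam * mu) * hyp1f1(alpha, alpha + beta, lam)
hoeffding = np.exp(lam**2 / 8)        # exp((b-a)^2 lam^2 / 8) with (b-a)^2 = 1
assert np.all(centered_mgf <= hoeffding)
```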

Back to the Beta, where [a,b]=[0,1], this shows the Beta is \frac{1}{4}-sub-Gaussian. Finding the optimal proxy variance is a more challenging issue. In addition to characterizing the optimal proxy variance of the Beta distribution in the note, we provide the simple upper bound \frac{1}{4(\alpha+\beta+1)}. It matches Hoeffding's bound in the extremal case \alpha\to0, \beta\to0, where the Beta random variable concentrates on the two-point set \{0,1\} (and where Hoeffding's bound is tight).

In deriving the bound \frac{1}{4(\alpha+\beta+1)}, we prove a recent conjecture made by Sam Elder in the context of Bayesian adaptive data analysis. I'll say more about obtaining the optimal proxy variance in a forthcoming post.
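The chain of bounds \text{Var}[X]\leq\frac{1}{4(\alpha+\beta+1)}\leq\frac{1}{4} is elementary to check numerically (a small sketch of mine). Note that the left inequality reduces to 4\alpha\beta\leq(\alpha+\beta)^2, i.e. AM-GM, so it is an equality exactly when \alpha=\beta, in which case the variance itself is the optimal proxy variance.

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) random variable."""
    return a * b / ((a + b)**2 * (a + b + 1))

def elder_bound(a, b):
    """Upper bound 1/(4(a+b+1)) on the optimal proxy variance."""
    return 1.0 / (4 * (a + b + 1))

# the chain Var <= 1/(4(a+b+1)) <= 1/4 for a few parameter choices
for a, b in [(0.5, 0.5), (1.0, 1.3), (2.0, 3.0), (10.0, 0.1)]:
    assert beta_var(a, b) <= elder_bound(a, b) <= 0.25
```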




Faà di Bruno’s note on eponymous formula, trilingual version

Posted in General by Julyan Arbel on 20 December 2016


The Italian mathematician Francesco Faà di Bruno was born in Alessandria (Piedmont, Italy) in 1825 and died in Turin in 1888. At the time of his birth, Piedmont was part of the Kingdom of Sardinia, led by the Dukes of Savoy. Italy was then unified in 1861, and the Kingdom of Sardinia became the Kingdom of Italy, of which Turin was declared the first capital. At that time, the Piedmontese commonly spoke both Italian and French.

Faà di Bruno is probably best known today for the eponymous formula which generalizes the derivative of a composition of two functions, \phi\circ \psi, to any order:

(\phi\circ \psi)^{(n)} = \sum \frac{n!}{m_1!\,\ldots m_n!}\,\phi^{(m_1+\,\cdots \,+m_n)}\circ \psi \cdot \prod_{j=1}^n\left(\frac{\psi^{(j)}}{j!}\right)^{m_j}

over n-tuples (m_1,\,\ldots \,, m_n) satisfying \sum_{j=1}^{n}j m_j = n.
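The formula is easy to check symbolically. Here is a small sketch of mine (not from the notes): I take \phi=\exp, so that all outer derivatives coincide with \exp itself, and an arbitrary polynomial for \psi, then compare the formula against direct differentiation with sympy.

```python
from itertools import product
from math import factorial
import sympy as sp

x = sp.symbols('x')
psi = x**3 + 2*x            # arbitrary inner function psi
# outer function phi = exp, so phi^{(k)} o psi = exp(psi) for every order k

def faa_di_bruno(n):
    """n-th derivative of exp(psi(x)) via Faa di Bruno's formula."""
    total = sp.Integer(0)
    # sum over multiplicity tuples (m_1, ..., m_n) with sum_j j*m_j = n
    for m in product(range(n + 1), repeat=n):
        if sum(j * mj for j, mj in enumerate(m, start=1)) != n:
            continue
        denom = 1
        for mj in m:
            denom *= factorial(mj)
        inner = sp.Mul(*[(sp.diff(psi, x, j) / factorial(j))**mj
                         for j, mj in enumerate(m, start=1)])
        total += sp.Rational(factorial(n), denom) * sp.exp(psi) * inner
    return sp.expand(total)

n = 4
direct = sp.expand(sp.diff(sp.exp(psi), x, n))
assert sp.simplify(direct - faa_di_bruno(n)) == 0
```

For n=4 the multiplicity tuples are (4,0,0,0), (2,1,0,0), (0,2,0,0), (1,0,1,0) and (0,0,0,1), recovering the classical expansion \psi'^4+6\psi'^2\psi''+3\psi''^2+4\psi'\psi'''+\psi'''' inside the exponential factor.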

Faà di Bruno published his formula in two notes:

  • Faà Di Bruno, F. (1855). Sullo sviluppo delle funzioni. Annali di Scienze Matematiche e Fisiche, 6:479–480. Google Books link.
  • Faà Di Bruno, F. (1857). Note sur une nouvelle formule de calcul différentiel. Quarterly Journal of Pure and Applied Mathematics, 1:359–360. Google Books link.

They both date from December 1855 and were signed in Paris. They are similar and essentially state the formula without a proof. I have arXived a note which contains a translation of the French version into English (reproduced below), as well as the two original notes in French and in Italian. For this I used the Erasmus MMXVI font; thanks Xian for sharing!
