# Statisfaction

## A glimpse of Inverse Problems

Posted in General, Seminar/Conference, Statistics by JB Salomond on 15 November 2012

Hi folks!

Last Tuesday a seminar on Bayesian procedures for inverse problems took place at CREST. We had time for two presentations by young researchers, Bartek Knapik and Kolyan Ray. Both presentations dealt with the problem of observing a noisy version of a linear transform of the parameter of interest

$Y = K\mu + \frac{1}{\sqrt{n}} Z,$
where $K$ is a linear operator and $Z$ a Gaussian white noise. Both presentations considered asymptotic properties of the posterior distribution (their papers can be found on arXiv, here for Bartek’s, and here for Kolyan’s). There is a wide literature on asymptotic properties of the posterior distribution in direct models. When looking at the concentration of $f$ toward a true distribution $f_0$ given the data, with respect to some distance $d(\cdot,\cdot)$, a well-known problem is to derive concentration rates, that is, the rate $\epsilon_n$ such that

$\pi(d(f,f_0) > \epsilon_n | X^n) \to 0.$

For inverse problems, the usual methods, as introduced by Ghosal, Ghosh and van der Vaart (2000), usually fail, and thus results in this setting are in general difficult to obtain.

Bartek presented some very refined results in the conjugate case. He managed to obtain results on the concentration rates of the posterior distribution, on Bayesian credible sets, and Bernstein–von Mises theorems – which state that the posterior is asymptotically Gaussian – when estimating a linear functional of the parameter of interest. Kolyan gave general conditions on the prior to achieve a given concentration rate, and proved that these techniques lead to optimal concentration rates for classical models.
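To make the conjugate setting concrete, here is a minimal simulation sketch of the white noise model above, reduced to its sequence form along the singular basis of $K$. Everything here is an illustrative assumption, not taken from either paper: the singular values `kappa`, the prior variances `lam`, and the "true" parameter `mu0` are hypothetical choices, picked only so that the coordinate-wise Gaussian posterior can be written in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000   # noise level is 1/sqrt(n)
N = 200      # truncation level of the sequence model

i = np.arange(1, N + 1)
kappa = i ** -1.0   # assumed singular values of K (mildly ill-posed decay)
lam = i ** -2.0     # assumed prior variances: mu_i ~ N(0, lam_i)
mu0 = i ** -1.5     # hypothetical "true" parameter

# Sequence-space observation: Y_i = kappa_i * mu0_i + Z_i / sqrt(n)
Y = kappa * mu0 + rng.standard_normal(N) / np.sqrt(n)

# Conjugate Gaussian posterior, coordinate by coordinate:
# mu_i | Y ~ N( n*kappa_i*lam_i*Y_i / (n*kappa_i^2*lam_i + 1),
#               lam_i / (n*kappa_i^2*lam_i + 1) )
post_var = lam / (n * kappa**2 * lam + 1)
post_mean = n * kappa * Y * post_var

err = np.sqrt(np.sum((post_mean - mu0) ** 2))
print(f"L2 distance of posterior mean to truth: {err:.4f}")
```

Rerunning this with larger `n` shrinks the error, which is the finite-sample face of the posterior concentration discussed above; the decay rates of `kappa` and `lam` govern how fast.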

I knew only a little about inverse problems, but both talks were very accessible, and I will surely get more involved in this field!

### 2 Responses

1. Julyan said, on 16 November 2012 at 17:17

Good to see you here bro 🙂

2. Bartek said, on 22 November 2012 at 12:51

Hello JB,

Thank you for this nice post; it would indeed be nice if you got interested in the field of Bayesian approaches to inverse problems. It is definitely an interesting thing to do, and there are plenty of open problems and possible paths to pursue.

Let me clarify one thing, namely the Bernstein–von Mises part. My results are indeed in the conjugate setting, therefore the posterior distributions are always Gaussian, not only asymptotically. I obtain a Bernstein–von Mises type result in a semiparametric setting of estimation of linear functionals of the parameter of interest. Let me denote the posterior distribution by N(m_n, S_n). The distribution of the posterior mean m_n is also Gaussian, with covariance T_n. These are Gaussian random elements, either in R or in some Hilbert space. On top of normality of the posterior, we also need that:
(1) S_n and T_n are asymptotically equivalent (loosely speaking, their corresponding eigenvalues are equivalent);
(2) (T_n)^(-1/2)(m_n - m_0) tends in distribution to N(0, I);
(3) m_n is an asymptotically efficient estimator of m_0.

As it turns out, in the fully nonparametric problem the covariances S_n and T_n are not equivalent (using the priors that we consider), but that can happen in the linear functional setting. More on that (in the linear functional setting) can be found in the article you cited, and also in http://projecteuclid.org/euclid.ejs/1305034907.