In a previous post, Pierre wrote about Bayesian model comparison and the limitations of Bayes factors in the presence of vague priors. Here we are, one year later, and I am happy to announce that our joint work with Jie Ding and Vahid Tarokh has recently been accepted for publication. By way of celebration, allow me to give you another take on the matter.

Given some observations $y_1, \dots, y_n$ generated as independent draws from $\mathcal{N}(0, 1)$, consider the seemingly simple task of choosing between the following two Normal models

$\mathcal{M}_1$ : $y_i \,|\, \theta_1 \sim \mathcal{N}(\theta_1, 1)$, $\theta_1 \sim \mathcal{N}(0, \sigma^2)$,

$\mathcal{M}_2$ : $y_i \,|\, \theta_2 \sim \mathcal{N}(\theta_2, v)$, $\theta_2 \sim \mathcal{N}(0, 1)$,

where $\sigma > 0$ is a known hyperparameter that controls how vague the prior on $\theta_1$ is, and $v \neq 1$ is a fixed variance. In such a setting, only model $\mathcal{M}_1$ is well-specified (in the sense of containing the true data generating process), and we can reasonably demand from any sensible model selection criterion that it correctly select $\mathcal{M}_1$ over $\mathcal{M}_2$. The log-Bayes factor, defined as $\log \mathrm{BF}(\mathcal{M}_1, \mathcal{M}_2) = \log p_1(y_{1:n}) - \log p_2(y_{1:n})$, where $p_j(y_{1:n}) = \int p_j(y_{1:n} \,|\, \theta_j)\, \pi_j(\theta_j)\, d\theta_j$ denotes the marginal likelihood of $\mathcal{M}_j$, fits the bill asymptotically, as it is almost surely positive (thus choosing $\mathcal{M}_1$) for all $n$ large enough. However, for any fixed sample size $n$ (no matter how large), making the prior on $\theta_1$ more vague by increasing $\sigma$ drives $p_1(y_{1:n})$ to 0 and $\log \mathrm{BF}(\mathcal{M}_1, \mathcal{M}_2)$ to $-\infty$, which has the undesirable effect of tricking the log-Bayes factor into preferring the misspecified model $\mathcal{M}_2$. This is illustrated in the picture above, where $\sigma$ is increased from 0 to 150, then 350. The alarming point is not so much that the value of $\log \mathrm{BF}(\mathcal{M}_1, \mathcal{M}_2)$ varies when the prior changes (as it should, mathematically speaking), but rather that these variations can be unbounded and arbitrary, even for changes of prior that have no impact on the inference stage.

This phenomenon is concerning when using Bayes factors to perform model selection, since the choice of one prior is able to dictate the conclusion of the procedure even though the “fit” of the model is virtually unchanged beyond a certain vagueness: indeed, as $\sigma$ increases, the posterior $\pi_1(\theta_1 \,|\, y_{1:n})$ stabilizes to $\mathcal{N}(\bar{y}_n, 1/n)$, leading the posterior predictive distribution given by $p_1(y \,|\, y_{1:n}) = \int p_1(y \,|\, \theta_1)\, \pi_1(\theta_1 \,|\, y_{1:n})\, d\theta_1$ to essentially coincide with the data generating distribution, for any large fixed $n$ and all priors with $\sigma$ large enough; yet, as soon as $\sigma$ becomes too large, the Bayes factor mechanically (and wrongly) rules out $\mathcal{M}_1$ against other misspecified models.
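To see the effect concretely, here is a small sketch in Python (with illustrative values of $n$ and the seed that are my own, not taken from the post), using the closed-form conjugate Normal-Normal formulas for $\mathcal{M}_1$: the log marginal likelihood keeps dropping as $\sigma$ grows, while the posterior of $\theta_1$ barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.normal(0.0, 1.0, size=n)  # independent draws from N(0, 1)

def log_marginal_m1(y, sigma):
    """Log marginal likelihood of M1: y_i | theta ~ N(theta, 1), theta ~ N(0, sigma^2).

    Marginally y ~ N(0, I_n + sigma^2 * 1 1^T), which gives a closed form."""
    n = len(y)
    s1, s2 = y.sum(), np.sum(y ** 2)
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.log(1.0 + n * sigma ** 2)
            - 0.5 * (s2 - sigma ** 2 * s1 ** 2 / (1.0 + n * sigma ** 2)))

def posterior_m1(y, sigma):
    """Posterior of theta under M1 is N(m, v), with precision n + 1/sigma^2."""
    n = len(y)
    v = 1.0 / (n + 1.0 / sigma ** 2)
    return v * y.sum(), v

for sigma in [1.0, 150.0, 350.0]:
    m, v = posterior_m1(y, sigma)
    print(f"sigma={sigma:5.0f}  log p1(y) = {log_marginal_m1(y, sigma):9.2f}  "
          f"posterior mean = {m:.6f}")
```

The marginal likelihood decreases roughly like $-\log \sigma$ once $\sigma$ is large, while the posterior mean is already essentially frozen at $\bar{y}_n$: exactly the decoupling between "evidence" and "fit" described above.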

One possible solution to address this sensitivity to priors’ vagueness is to take a decision-theoretic approach, by regarding the Bayes factor as a decision rule consisting in choosing the model that minimizes the prequential score $\sum_{i=1}^{n} S\big(y_i, p(\cdot \,|\, y_{1:i-1})\big)$, for the particular choice of score function $S(y, p) = -\log p(y)$, known as the *log-score*. A score function $S$ has an associated divergence function $D(q \,\|\, p) = \mathbb{E}_{Y \sim q}\big[S(Y, p) - S(Y, q)\big]$, which quantifies the expected excess loss incurred when using a distribution $p$ to predict a random outcome $Y$ whose true distribution is $q$. In particular, the divergence associated with the log-score is the well-known Kullback-Leibler divergence $\mathrm{KL}(q \,\|\, p) = \int q(y) \log\big(q(y)/p(y)\big)\, dy$. Finding an alternative to the Bayes factor thus becomes a matter of using a different divergence, and Dawid & Musio (2015) suggest using

,

sometimes referred to as the *relative Fisher information divergence*. Its key feature is that, contrary to the Kullback-Leibler divergence, its value remains unchanged when multiplying $p$ by an arbitrary constant (which is effectively what happens when making a prior more vague, as its normalizing constant becomes arbitrary). The associated score is the *Hyvärinen score*, defined as

$$H(y, p) = 2\, \Delta_y \log p(y) + \big\| \nabla_y \log p(y) \big\|^2.$$
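As a quick numerical sanity check (my own illustrative sketch, not code from the paper), one can evaluate the Hyvärinen score of a univariate Gaussian by finite differences and verify that multiplying the density by an arbitrary constant, i.e. adding a constant to $\log p$, leaves the score unchanged:

```python
import numpy as np

def hyvarinen_score(y, log_p, eps=1e-4):
    # H(y, p) = 2 * (d^2/dy^2) log p(y) + ((d/dy) log p(y))^2,
    # with both derivatives approximated by central finite differences.
    grad = (log_p(y + eps) - log_p(y - eps)) / (2.0 * eps)
    lap = (log_p(y + eps) - 2.0 * log_p(y) + log_p(y - eps)) / eps ** 2
    return 2.0 * lap + grad ** 2

mu, s2 = 0.3, 2.0
log_p = lambda y: -0.5 * np.log(2 * np.pi * s2) - (y - mu) ** 2 / (2.0 * s2)
log_p_unnorm = lambda y: log_p(y) + 123.0  # same density times the constant e^123

y0 = 1.7
exact = -2.0 / s2 + (y0 - mu) ** 2 / s2 ** 2  # closed form for a N(mu, s2) density
print(hyvarinen_score(y0, log_p), exact)          # agree
print(hyvarinen_score(y0, log_p_unnorm), exact)   # the constant has no effect
```

In contrast, the log-score of the unnormalized density would be shifted by exactly 123, which is the kind of arbitrary constant a vague prior injects into marginal likelihoods.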

One would then select the model minimizing the *H-score*, defined as $\mathcal{H}_n(\mathcal{M}) = \sum_{i=1}^{n} H\big(y_i, p(\cdot \,|\, y_{1:i-1}, \mathcal{M})\big)$, which is now robust to any arbitrary vagueness in the specification of priors, as shown in the picture (where *H-factors* simply refer to differences of H-scores). In practice, H-scores need to be estimated. This can be achieved consistently using sequential Monte Carlo methods (e.g. IBIS when the likelihood is available, or SMC² for state-space models with intractable likelihoods). We can also prove (under strong assumptions) that the H-score leads to consistent model selection. If interested, more can be learned here; the code for the figure is provided here.
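In the conjugate Normal setting of $\mathcal{M}_1$, no Monte Carlo is actually needed: every one-step-ahead posterior predictive is itself Gaussian, so the prequential H-score has a closed form. Here is a sketch (again with illustrative values of my own) showing that it barely moves when $\sigma$ jumps from 150 to 350:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=1000)  # independent draws from N(0, 1)

def prequential_h_score(y, sigma):
    """H-score of M1: y_i | theta ~ N(theta, 1), theta ~ N(0, sigma^2).

    The predictive of y_i given y_{1:i-1} is N(m, v + 1), and for a
    N(m, t2) predictive, H(y, p) = -2/t2 + (y - m)^2 / t2^2."""
    m, v = 0.0, sigma ** 2  # prior mean and variance of theta
    total = 0.0
    for yi in y:
        t2 = v + 1.0  # predictive variance of the next observation
        total += -2.0 / t2 + (yi - m) ** 2 / t2 ** 2
        v_new = 1.0 / (1.0 / v + 1.0)  # conjugate posterior update
        m = v_new * (m / v + yi)
        v = v_new
    return total

print(prequential_h_score(y, 150.0))
print(prequential_h_score(y, 350.0))  # nearly identical to the previous line
```

The two values agree to a few decimal places, whereas the corresponding log marginal likelihoods differ by about $\log(350/150) \approx 0.85$ nats: only the first few predictives feel the prior, and their Hyvärinen scores stay bounded as $\sigma$ grows.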

Thanks for reading!

Interesting post and paper! Can you actually justify minimizing the H-score as a Bayes action in an expected utility problem? I see how Bayes factors would come out of this (e.g. as in Bernardo and Smith, Chapter 6.1, p. 391), but I’m not sure I see how to introduce another type of “score”. When you said “decision-theoretic justification”, I thought you had such a point, but now I’m not sure that is what you meant.

Thank you for your comment, Rémi! By “decision-theoretic justification”, we were alluding to something along the lines of p. 403-405 from Bernardo and Smith (1994, Section 6.1.6), i.e. comparing models in an M-open setting based on their predictive performances, albeit with the Hyvärinen score (instead of the log-score) to assess predictions, and a prequential framework à la Dawid (1984, 1992) to approximate the expected loss (instead of more general cross-validation schemes). Although there are a few remarks on prequential analysis in Bernardo and Smith (1994, e.g. see p. 485 and 488) suggesting that the prequential framework does not exactly fall within Bayesian decision theory, Dawid (1992) still uses decision-theoretic ideas to define actions and prequential losses when regarding models as probability forecasting systems.

Thanks for the documented answer!