Nonparametric Bayesian uncertainty quantification: confidence in credible sets?
Aad van der Vaart, Leiden University

In a nonparametric Bayesian framework a functional parameter (e.g. a regression function) is equipped with a prior and an ordinary Bayesian analysis is performed, with the posterior distribution of the function as output. Typically the prior carries a bandwidth parameter, through which the analysis tries to adapt to the smoothness of the unknown function. In a full (or hierarchical) Bayesian framework one puts a prior on this parameter, while in an empirical Bayesian framework one estimates it, for instance by maximizing the marginal likelihood of the data, and then uses the posterior distribution corresponding to the estimated bandwidth. It has been documented in the past decade, particularly for the hierarchical approach, that such procedures are often successful at reconstructing the function: the posterior distribution contracts to the true regression function at an optimal rate, which is faster if the true function is smoother. However, the core of the Bayesian method is also to use the posterior distribution to quantify the remaining uncertainty in the analysis, i.e. to give a margin of error on the reconstruction. A credible set, a central set of prescribed posterior probability (e.g. 95%), might be used for this purpose. In this talk we discuss the validity of such a procedure, in particular when the prior bandwidth is adjusted by one of the methods outlined above. Since uncertainty quantification in a nonparametric setup always requires extrapolation, the procedure can be very misleading. On the other hand, the Bayesian procedure also seems to work well in many situations. To distinguish these situations we introduce the concept of polished tail functions. [Based on joint work with Harry van Zanten, Botond Szabo and Suzanne Sniekers.]
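
As a concrete illustration (not part of the abstract), the sketch below assumes a specific instance of the setup described above: fixed-design regression under a Gaussian-process prior with squared-exponential covariance, where the prior bandwidth is the length-scale. It selects the bandwidth by maximizing the marginal likelihood, in the spirit of the empirical Bayes approach mentioned above, and then forms a 95% pointwise credible band from the resulting posterior; whether such a band actually covers the true function is exactly the coverage question the talk addresses. The kernel, noise level, true function, and bandwidth grid are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup (assumed, not from the abstract): Y_i = f(x_i) + eps_i,
# with a zero-mean squared-exponential GP prior on f and known noise level.
rng = np.random.default_rng(0)
n, sigma = 100, 0.3
x = np.linspace(0.0, 1.0, n)
f_true = np.sin(6 * x) + 0.5 * np.cos(15 * x)   # hypothetical "true" function
y = f_true + sigma * rng.normal(size=n)

def sq_exp_kernel(a, b, ell):
    """Squared-exponential covariance with bandwidth (length-scale) ell."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def log_marginal_likelihood(ell):
    """Log marginal likelihood of the data under the GP prior with bandwidth ell."""
    K = sq_exp_kernel(x, x, ell) + sigma ** 2 * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))

# Empirical Bayes: pick the bandwidth maximizing the marginal likelihood on a grid.
grid = np.logspace(-2, 0, 50)
ell_hat = grid[np.argmax([log_marginal_likelihood(ell) for ell in grid])]

# Posterior given the estimated bandwidth: mean and pointwise variance at the design points.
K = sq_exp_kernel(x, x, ell_hat)
Ky = K + sigma ** 2 * np.eye(n)
post_mean = K @ np.linalg.solve(Ky, y)
post_cov = K - K @ np.linalg.solve(Ky, K)
post_sd = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))

# 95% pointwise credible band; its frequentist coverage of f_true is the
# uncertainty-quantification question discussed in the talk.
lower, upper = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
coverage = np.mean((f_true >= lower) & (f_true <= upper))
print(f"estimated bandwidth: {ell_hat:.3f}, pointwise coverage of f_true: {coverage:.2f}")
```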