- Winter 2020-2021
- Thursday 17.12.2020, 1pm, Webex
Hélène Halconruy (Télécom ParisTech), Kernel selection in nonparametric regression
Abstract: Adaptive estimation has become a key issue in nonparametric estimation and has been widely investigated for years. Several data-driven procedures have been designed to select, from a collection of preliminary estimators, one that performs well. Among them, the Goldenshluger-Lepski method (2011) provides a bandwidth selection procedure for kernel estimators that achieves the bias-variance compromise. However, its implementation is difficult, which led C. Lacour, P. Massart and V. Rivoirard (2017) to introduce the so-called Penalized Comparison to Overfitting (PCO) estimator, based on the concept of minimal penalty.
In joint work with Nicolas Marie, we investigate a regression model for which we select, via the PCO method, an estimator of bf (where b is the regression function and f the density of the input variable), and we prove an oracle inequality for the associated adaptive estimator. Our result holds in a broad framework: it covers the well-known bandwidth selection for kernel-based estimators and extends the method to dimension selection for anisotropic projection estimators.
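To make the selection rule concrete, here is a minimal sketch of the PCO principle in the simplest setting, Gaussian-kernel density estimation (the talk's regression framework is more general; all names and parameter values below are illustrative): each candidate estimator is compared to the most overfitting one, and a variance-type penalty is added.

```python
import numpy as np

def pco_bandwidth(x, bandwidths, grid):
    """Toy Penalized Comparison to Overfitting (PCO) bandwidth selection
    for Gaussian-kernel density estimation (illustrative sketch only)."""
    n = len(x)

    def kde(h):
        # Gaussian kernel density estimate evaluated on `grid`
        u = (grid[:, None] - x[None, :]) / h
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    h_min = min(bandwidths)
    f_min = kde(h_min)            # the most overfitting estimator
    dx = grid[1] - grid[0]
    best_h, best_crit = None, np.inf
    for h in bandwidths:
        # squared distance to the overfitting estimator acts as a bias proxy
        dist = np.sum((kde(h) - f_min) ** 2) * dx
        # penalty 2<K_h, K_{h_min}>/n for Gaussian kernels
        # (an h-independent term is dropped since it does not affect the argmin)
        pen = 2.0 / (n * np.sqrt(2.0 * np.pi * (h ** 2 + h_min ** 2)))
        if dist + pen < best_crit:
            best_h, best_crit = h, dist + pen
    return best_h

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
h_hat = pco_bandwidth(sample, [0.05, 0.1, 0.2, 0.4, 0.8], np.linspace(-4, 4, 401))
```

On standard normal data, the criterion steers the choice away from both the undersmoothed and the oversmoothed ends of the grid.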
- Thursday 19.11.2020, 1pm, Webex
Xiaochuan Yang (University of Bath), Quantitative two-scale stabilisation on the Poisson space
Abstract: Stabilisation theory was initiated in the early 2000s by Penrose and Yukich as a high-level abstraction of the famous CLT for minimal spanning trees of Kesten and Lee. Since its birth, this beautiful theory has become one of the most fundamental tools for proving Gaussian approximation of stochastic geometric models, e.g. coverage processes, random tessellations, spatial networks, etc. In recent joint work with G. Peccati and R. Lachièze-Rey, we develop a quantitative stabilisation theory which gives rates of multivariate Gaussian approximation for general stabilising Poisson functionals, extending some estimates from a recent paper of Chatterjee and Sen on the rate of normal convergence of minimal spanning trees. Several examples are worked out to illustrate our results, including online nearest neighbour graphs, edge statistics of Euclidean minimal spanning trees, and excursions of heavy-tailed shot noise random fields.
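For readers new to the area, the basic notion can be recalled (standard background, not specific to the paper):

```latex
% For a Poisson process eta and a functional F, the add-one cost at a point x is
\[
  D_x F(\eta) \;=\; F(\eta \cup \{x\}) \;-\; F(\eta).
\]
% F is called stabilising when D_x F(eta) is unaffected by points of eta lying
% outside an almost surely finite (random) radius around x; this local-dependence
% property is what drives Gaussian limits for the geometric models above.
```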
- Thursday 12.11.2020, 1pm, Webex
James Thompson (University of Luxembourg), An Agent-Based Model of COVID-19
Abstract: The ongoing coronavirus pandemic is the most disruptive event in recent history. It is of vital importance that we continue to build a rigorous understanding of how the SARS-CoV-2 virus spreads within the human population, and that we predict the impact of interventions. Our team has approached this problem by developing a detailed agent-based model of COVID-19, offering unique insights into the dynamics and control of the disease. While we have focused on simulating the epidemic in Luxembourg, the model has been constructed so that it can also be applied to other regions. This project is joint work with Stephen Wattam and Mikolaj Kasprzak, and is currently a work in progress.
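As an illustration of what "agent-based" means here, the following toy SIR simulation captures the mechanism in miniature. It is not the speakers' model: every parameter value is made up for the example.

```python
import numpy as np

S, I, R = 0, 1, 2  # susceptible, infectious, recovered

def run_abm(n_agents=2000, n_days=120, p_transmit=0.03,
            contacts_per_day=10, recovery_days=10, n_seed=10, seed=42):
    """Minimal agent-based SIR epidemic. Each day, every infectious agent
    meets a few uniformly random agents and may transmit; infections clear
    after a fixed number of days. All parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    state = np.full(n_agents, S)
    days_infected = np.zeros(n_agents, dtype=int)
    state[rng.choice(n_agents, size=n_seed, replace=False)] = I
    infected_curve = []
    for _ in range(n_days):
        for agent in np.flatnonzero(state == I):
            contacts = rng.integers(0, n_agents, size=contacts_per_day)
            transmit = rng.random(contacts_per_day) < p_transmit
            # only susceptible contacts can be infected
            state[contacts[(state[contacts] == S) & transmit]] = I
        days_infected[state == I] += 1
        state[(state == I) & (days_infected >= recovery_days)] = R
        infected_curve.append(int((state == I).sum()))
    return state, infected_curve

final_state, infected_curve = run_abm()
```

A detailed model like the speakers' replaces the uniform mixing above with structured contacts (households, workplaces, schools), which is where agent-based models earn their keep.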
- Thursday 29.10.2020, 1pm, Webex
Martin Wahl (Humboldt-Universität zu Berlin), Upper and lower bounds for the estimation of principal components
Abstract: In settings where the number of observations is comparable to the dimension, principal component analysis (PCA) reveals some unexpected phenomena, ranging from eigenprojector inconsistency to eigenvalue upward bias. While such high-dimensional phenomena are now well understood in the spiked covariance model, the goal of this talk is to discuss some extensions for the case of PCA in infinite dimensions.
In such scenarios the spiked covariance model becomes less important and typically different eigenvalue decay assumptions are investigated instead. Our main results show that the behavior of eigenvalues and eigenprojectors of empirical covariance operators can be characterized by the so-called “relative ranks”. The proofs rely on a novel perturbation-theoretic framework, combined with concentration inequalities for sub-Gaussian chaoses in Banach spaces.
If time permits, we will also present corresponding minimax lower bounds for the estimation of eigenprojectors. These are obtained by a van Trees (resp. Cramér-Rao) inequality for invariant statistical models.
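The eigenvalue upward bias mentioned at the start of the abstract is easy to reproduce numerically (a standard illustration, not a result from the talk): when the dimension is comparable to the sample size, the top sample eigenvalue overshoots the truth substantially.

```python
import numpy as np

# For X ~ N(0, I_d) with d = n, every true covariance eigenvalue is 1, yet the
# top sample eigenvalue concentrates near the Marchenko-Pastur edge
# (1 + sqrt(d/n))^2 = 4, while the average eigenvalue stays close to 1.
rng = np.random.default_rng(1)
n = d = 400
X = rng.normal(size=(n, d))
sample_cov = X.T @ X / n
eigs = np.linalg.eigvalsh(sample_cov)   # ascending order
top, trace_avg = eigs[-1], eigs.mean()
```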
- Thursday 22.10.2020, 1pm, Webex
Lutz Dümbgen (University of Bern), Shape-Constrained Distributional Regression – Stochastic and Likelihood Ratio Order
Abstract: We consider nonparametric bivariate regression with generic observations (X,Y). A possible and often natural assumption is that the conditional distribution of Y, given that X = x, is “increasing” in x. A standard notion of “increasing” would be the usual stochastic order, and we present estimators and asymptotic properties for that setting. A stronger notion of order is the likelihood ratio order which is well-known from mathematical statistics and binary classification. We review this property briefly but in full generality and describe its relation to so-called multivariate total positivity of order 2 (MTP2). Then we present an algorithm to estimate the joint distribution of (X,Y) from empirical data under the sole assumption that the conditional distribution of Y, given that X = x, is increasing in x with respect to likelihood ratio order.
This is joint work with Alexandre Mösching (Bern, Göttingen).
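For reference, the two orders compared in the talk are standard (the definitions below are textbook material):

```latex
% For x_1 < x_2, write F_x for the conditional c.d.f. and f_x for the
% conditional density of Y given X = x (assumed to exist for the second order).
\[
  \text{usual stochastic order:}\qquad
  F_{x_1}(y) \,\ge\, F_{x_2}(y) \quad \text{for all } y,
\]
\[
  \text{likelihood ratio order:}\qquad
  y \,\mapsto\, \frac{f_{x_2}(y)}{f_{x_1}(y)} \ \text{is nondecreasing.}
\]
% The likelihood ratio order implies the usual stochastic order, so it is
% the stronger assumption.
```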
- Thursday 15.10.2020, 1pm, Webex
Ismael Castillo (LPSM), Supremum-norm inference with Bayesian CART
Abstract: This paper affords new insights about Bayesian CART in the context of structured wavelet shrinkage. We show that practically used Bayesian CART priors lead to adaptive rate-minimax posterior concentration in the supremum norm in Gaussian white noise, performing optimally up to a logarithmic factor. To further explore the benefits of structured shrinkage, we propose the g-prior for trees, which departs from the typical wavelet product priors by harnessing correlation induced by the tree topology. Building on supremum norm adaptation, an adaptive non-parametric Bernstein-von Mises theorem for Bayesian CART is derived using multiscale techniques. For the fundamental goal of uncertainty quantification, we construct adaptive confidence bands with uniform coverage for the regression function under self-similarity.
This is joint work with Veronika Rockova (Chicago).
- Thursday 08.10.2020, 2pm, MSA 4.530
Emmanuel Rio (University of Versailles), About the constants in the deviation inequalities for martingales
Abstract: In this talk, we will give deviation inequalities for martingales and sums of independent random variables. We will start by giving some constants in Fuk-Nagaev type inequalities. Next we will present another approach, which yields more precise results in the case of martingales with finite third moments. Finally, we apply estimates of the minimal distances in the central limit theorem to obtain upper bounds for the tail-quantiles of sums of independent random variables.
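For context, one common form of the Fuk-Nagaev inequality reads as follows; the unspecified constants are exactly what such talks aim to sharpen (the displayed form is standard background, not the speaker's result):

```latex
% Setting: X_1, ..., X_n independent and centered, S_n = X_1 + ... + X_n,
% V = \sum_i E[X_i^2], and p > 2 a moment exponent. Then for all x > 0,
\[
  \mathbb{P}(S_n \ge x)
  \;\le\; c_1(p)\, \frac{\sum_{i=1}^{n} \mathbb{E}\lvert X_i\rvert^{p}}{x^{p}}
  \;+\; \exp\!\left(-\frac{c_2(p)\, x^{2}}{V}\right).
\]
% The first term captures heavy-tailed, single-summand deviations; the second
% is the Gaussian regime.
```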
- Thursday 01.10.2020, 1pm, Webex
Gérard Biau (Sorbonne Université), Theoretical Insights into Wasserstein GANs
Abstract: Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation. Building on these successes, a large number of empirical studies have validated the benefits of the cousin approach called Wasserstein GANs (WGANs), which brings stabilization in the training process. In the present contribution, we add a new stone to the edifice by proposing some theoretical advances in the properties of WGANs. First, we properly define the architecture of WGANs in the context of integral probability metrics parameterized by neural networks and highlight some of their basic mathematical features. We stress in particular interesting optimization properties arising from the use of a parametric 1-Lipschitz discriminator. Then, in a statistically-driven approach, we study the convergence of empirical WGANs as the sample size tends to infinity, and clarify the adversarial effects of the generator and the discriminator by underlining some trade-off properties. These features are finally illustrated with experiments using both synthetic and real-world datasets.
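The integral probability metric formulation mentioned above is the following standard objective (background material, not a contribution of the paper):

```latex
% With D a class of 1-Lipschitz discriminators (in practice, a neural network
% class) and mu_star the data distribution, the generator distribution
% mu_theta is fitted by minimising an integral probability metric:
\[
  \inf_{\theta}\, d_{\mathcal{D}}(\mu_\star, \mu_\theta),
  \qquad
  d_{\mathcal{D}}(\mu, \nu)
  \;=\; \sup_{f \in \mathcal{D}}
  \left( \mathbb{E}_{X \sim \mu}\, f(X) - \mathbb{E}_{Y \sim \nu}\, f(Y) \right).
\]
% When D is the class of all 1-Lipschitz functions, d_D is the Wasserstein-1
% distance by Kantorovich-Rubinstein duality.
```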
- Thursday 17.09.2020, 2pm, Webex
Vincent Rivoirard (CEREMADE, Université Paris Dauphine), Nonparametric inference for Hawkes processes
Abstract: Hawkes processes are widely applied to event-type data with complex dependencies on the past of the process. They are particularly used in seismology, neuroscience, genetics and social network analysis. The goal of this talk is to present recent advances in nonparametric inference for multivariate Hawkes processes. In the first part of the talk, frequentist estimation of Hawkes parameters using Lasso-type estimators is described. Then, the Bayesian setting is considered. Concentration rates for the posterior distribution are established under reasonable assumptions on the prior distribution, first for linear multivariate Hawkes models, then for nonlinear ones. We also present a simulation study to illustrate our results and to study empirically the inference of functional connectivity graphs of neurons.
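As background on the model class, a univariate linear Hawkes process with exponential kernel can be simulated by Ogata's thinning algorithm. The sketch below is a standard toy implementation (parameter values are illustrative), not the inference methodology of the talk.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, seed=7):
    """Simulate a univariate linear Hawkes process by Ogata's thinning.

    Conditional intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta (t - t_i)).
    Stationarity requires alpha < beta."""
    rng = np.random.default_rng(seed)
    events = []

    def lam(t):
        past = np.asarray(events)
        return mu + alpha * np.sum(np.exp(-beta * (t - past)))

    t = 0.0
    while True:
        lam_bar = lam(t)                      # intensity only decays until the
        t += rng.exponential(1.0 / lam_bar)   # next event, so lam(t) is an upper bound
        if t >= t_max:
            break
        if rng.random() < lam(t) / lam_bar:   # thinning: accept w.p. lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

times = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, t_max=200.0)
```

The self-exciting jump of the intensity at each accepted event is what produces the clustered event patterns seen in seismic and neural spike data.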