Probability & Statistics Seminar
Upcoming sessions:
- Thursday 10.10.2024, 13h, MNO 1.020
Charles-Philippe Manuel Diez (Université du Luxembourg), Introduction to free probability
Abstract: In this talk we will introduce the concept of “free probability”, a theory developed by Voiculescu in the early 80s. We will introduce the notion of a non-commutative probability space with some examples, and the dictionary between classical and free probability. We will then introduce the notion of “freeness”, the free analogue of classical (tensor) independence, which was Voiculescu’s initial motivation for understanding the structure of the special von Neumann algebras called free group factors. We will then explore the analytic side of Voiculescu’s theory and the combinatorial structure of free probability via the lattice of non-crossing partitions discovered by Speicher. We will also present a deep and surprising connection to random matrix theory discovered by Voiculescu in 1991. This latter result was of profound importance in proving breakthrough results in the world of operator algebras, but also in the development of random matrix theory. Finally, if time permits, we will present some of these important results in a heuristic way by introducing the microstates approach to free entropy.
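As an illustrative aside (not part of the talk), the random-matrix connection mentioned in the abstract can be seen numerically: the eigenvalue distribution of a large symmetric Gaussian matrix approaches Wigner’s semicircle law, the free-probability analogue of the Gaussian distribution.

```python
import numpy as np

# Sketch: spectrum of a large symmetric Gaussian (GOE-type) random matrix.
rng = np.random.default_rng(0)
n = 400
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)     # off-diagonal entries ~ N(0, 1/n)
eigs = np.linalg.eigvalsh(W)

# The semicircle law is supported on [-2, 2]; its mass on [-1, 1] is
# sqrt(3)/(2*pi) + 1/3 ≈ 0.609.
frac = np.mean(np.abs(eigs) <= 1.0)
print(max(abs(eigs)), frac)
```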
- Thursday 17.10.2024, 13h, MNO 1.010
Grégoire Valentin Michel Szymanski (Université du Luxembourg), Statistical inference for rough volatility
Abstract: Rough volatility models have emerged as a powerful framework to capture the intricate dynamics and irregularities of financial markets. These models, characterized by fractional Brownian motion (fBM) with a Hurst parameter H < 1/2, provide an effective description of the high-frequency, rough behavior of stochastic volatility. In this talk, we offer an overview of three distinct contributions that tackle various facets of the challenging problem of estimating the Hurst parameter H. We first review the methodology proposed in “Volatility is Rough” to quantify the roughness of the volatility process, discuss its implications from a financial perspective, and address the statistical limitations inherent to this approach. We then focus on the estimation of H from discrete price observations within a semi-parametric setting, without assuming any predefined relationship between volatility estimators and true volatility. Our approach achieves the optimal minimax rate of convergence for parametric rough volatility models. Specifically, we show that the convergence rate for estimating H can reach n^{-1/(4H+2)} for small values of H.
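As a toy illustration (not the semi-parametric estimator of the talk, where volatility is not observed directly), one can simulate fBM with H < 1/2 from its covariance function and recover H from the 2^{2H} scaling of second moments of increments at two lags.

```python
import numpy as np

# Simulate fBM with Hurst parameter H via Cholesky factorization of its
# covariance Cov(B_s, B_t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2.
rng = np.random.default_rng(1)
H, n = 0.1, 1000
t = np.arange(1, n + 1) / n
C = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
B = np.linalg.cholesky(C) @ rng.standard_normal(n)

# Self-similarity gives E|B_{t+2d}-B_t|^2 / E|B_{t+d}-B_t|^2 = 2^{2H}.
m1 = np.mean(np.diff(B) ** 2)
m2 = np.mean((B[2:] - B[:-2]) ** 2)
H_hat = 0.5 * np.log2(m2 / m1)
print(H_hat)
```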
- Thursday 24.10.2024, 13h30, MNO 1.010
Lorenzo Cristofaro (Université du Luxembourg), A Class of non-Gaussian Measures and Related Analysis
Abstract: Over the last decades, infinite-dimensional analysis has been developed beyond the Gaussian setting: the tools of White Noise Analysis have been generalized to non-Gaussian measures, yielding notions and characterizations analogous to those of Gaussian analysis. In this talk, we present new results on the use of generalized Wright functions as characteristic functionals, the construction of the associated non-Gaussian measures, and their properties.
- Thursday 31.10.2024, 13h30, MNO 1.010
Guillaume Maillard (ENSAI), A model-based approach to density estimation in sup-norm
Abstract: We define a general method for finding a quasi-best approximant in sup-norm to a target density belonging to a given model, based on independent samples drawn from distributions which average to the target (which does not necessarily belong to the model). We also provide a general method for selecting among a countable family of such models. These estimators satisfy oracle inequalities in this general setting. The quality of the bounds depends on the volume of the sets on which |p − q| is close to its maximum, where p, q belong to the model (or possibly to two different models, in the case of model selection). This leads to optimal results in a number of settings, including piecewise polynomials on a given partition and anisotropic smoothness classes. Particularly interesting is the case of the single index model with fixed smoothness α, where we recover the one-dimensional rate: this was an open problem.
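A minimal numerical illustration (a toy, not the estimators of the talk): the sup-norm error of the simplest model-based density estimator, a piecewise-constant (histogram) estimate on a given partition, for i.i.d. samples from the uniform density on [0, 1].

```python
import numpy as np

# Histogram density estimate on a fixed partition of [0, 1] into 10 cells.
rng = np.random.default_rng(2)
n, bins = 10_000, 10
x = rng.random(n)
counts, edges = np.histogram(x, bins=bins, range=(0.0, 1.0))
p_hat = counts / (n * (1.0 / bins))    # estimated density value on each cell

# True density is p ≡ 1, so the sup-norm error is max_j |p_hat_j - 1|.
sup_err = np.max(np.abs(p_hat - 1.0))
print(sup_err)
```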
- Thursday 14.11.2024, 13h30, MNO 1.010
Gabriel Romon (Université du Luxembourg), Some estimators of location in a finite metric tree
Abstract: During this talk we discuss parameters of central tendency for a population on a network, which is modeled by a finite metric tree. In this non-Euclidean setting, we develop location parameters called generalized Fréchet means, which are obtained by replacing the usual objective function α ↦ E[d(α,X)²] with α ↦ E[ℓ(d(α,X))], where ℓ is a generic convex nondecreasing loss function. We develop a notion of directional derivative in the tree, which helps us locate and characterize the minimizers. Estimation is performed using a sample analog. We extend to a finite metric tree the notion of stickiness defined by Hotz et al. (2013), we show that this phenomenon has a non-asymptotic component and we obtain a sticky law of large numbers. For the particular case of the Fréchet median, we develop non-asymptotic concentration bounds and sticky central limit theorems.
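A hedged toy example (not from the talk): the Fréchet median on a 3-spider, the simplest metric tree, with points encoded as (leg index, distance to the center). Pulling one sample further out along its leg does not move the median off the center, which is the stickiness phenomenon in miniature.

```python
import numpy as np

def dist(p, q):
    """Tree metric on a 3-spider: along one leg, or through the center."""
    (i, x), (j, y) = p, q
    return abs(x - y) if i == j else x + y

def frechet_median(sample, grid):
    """Minimize a -> sum_i d(a, X_i) over a grid of candidate points."""
    candidates = [(leg, x) for leg in range(3) for x in grid]
    return min(candidates, key=lambda a: sum(dist(a, p) for p in sample))

grid = np.linspace(0.0, 1.5, 301)
# One sample on each leg at distance 1: the median is the center.
med1 = frechet_median([(0, 1.0), (1, 1.0), (2, 1.0)], grid)
# Perturb one sample outward: the median "sticks" to the center.
med2 = frechet_median([(0, 1.2), (1, 1.0), (2, 1.0)], grid)
print(med1, med2)
```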
- Monday 18.11.2024, 13h, MNO 1.040
Jose Ameijeiras-Alonso (Universidade de Santiago de Compostela), Modern Directional Smoothing: Advances in Circular Kernel Density Estimation
Abstract: This talk explores innovative techniques for analysing circular data, such as angles or time over a 24-hour period, and the unique challenges this type of data presents for density estimation. We focus on circular kernel density estimation, highlighting the crucial role of selecting the smoothing parameter. The approach introduces a novel, data-driven method tailored for circular data, utilizing plug-in techniques to reliably estimate unknown quantities. Building upon well-established methods, particularly those proposed by Sheather and Jones for data-driven bandwidth selection in linear data, we adapt them to the circular setting through direct and solve-the-equation plug-in rules. We derive the asymptotic mean integrated squared error of the density estimator and its derivatives, providing key insights into the method’s theoretical foundation. Through extensive simulations, we validate the effectiveness of our smoothing parameter selectors, comparing them against existing methods. Finally, we demonstrate the practical application of our plug-in rules with a real data example.
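As an illustrative sketch (with a fixed concentration parameter, not the data-driven plug-in selectors of the talk), circular kernel density estimation with the von Mises kernel amounts to averaging von Mises densities centred at the observed angles.

```python
import numpy as np

def vm_kde(theta, data, kappa):
    """Von Mises KDE; kappa plays the role of an inverse bandwidth."""
    theta = np.asarray(theta)[:, None]
    return np.mean(np.exp(kappa * np.cos(theta - data)),
                   axis=1) / (2 * np.pi * np.i0(kappa))

rng = np.random.default_rng(3)
data = rng.vonmises(mu=0.0, kappa=2.0, size=200)   # angles in (-pi, pi]
grid = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
f_hat = vm_kde(grid, data, kappa=20.0)

# A density on the circle should integrate to 1 over one full period.
integral = np.sum(f_hat) * (2 * np.pi / 1000)
print(integral)
```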
- Thursday 21.11.2024, 13h30, MNO 1.010
Sven Wang (Humboldt-Universität zu Berlin), M-estimation and statistical learning of neural operators
Abstract: We present statistical convergence results for the learning of mappings in infinite-dimensional spaces. Given a possibly nonlinear map between two separable Hilbert spaces, we analyze the problem of recovering the map from noisy input-output pairs corrupted by i.i.d. white noise processes or subgaussian random variables. We provide general convergence results for least-squares-type empirical risk minimizers over compact regression classes, in terms of their approximation properties and metric entropy bounds, proved using empirical process theory. This extends classical results in finite-dimensional nonparametric regression to an infinite-dimensional setting. As a concrete application, we study an encoder-decoder based neural operator architecture. Assuming holomorphy of the operator, we prove algebraic (in the sample size) convergence rates in this setting, thereby overcoming the curse of dimensionality. To illustrate the wide applicability of our results, we discuss a parametric Darcy-flow problem on the torus.
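A finite-dimensional toy (the talk’s setting is infinite-dimensional and the regression classes nonlinear): least-squares recovery of a linear operator between discretized function spaces from noisy input-output pairs.

```python
import numpy as np

# Learn a linear operator A on a d-point grid from m noisy pairs (u, Au + eps)
# by least-squares empirical risk minimization.
rng = np.random.default_rng(4)
d, m = 32, 200
i = np.arange(d)
A = np.exp(-np.abs(i[:, None] - i[None, :]) / 5.0)   # a smoothing operator
U = rng.standard_normal((m, d))                      # random input functions
Y = U @ A.T + 0.01 * rng.standard_normal((m, d))     # noisy outputs
A_hat = np.linalg.lstsq(U, Y, rcond=None)[0].T       # least-squares estimate
rel_err = np.linalg.norm(A_hat - A) / np.linalg.norm(A)
print(rel_err)
```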
- Thursday 5.12.2024, 13h30, MNO 1.010
Alba García-Ruiz (Universidad Autónoma de Madrid, ICMAT), TBA
Abstract: TBA
- Thursday 19.12.2024, 13h30, MNO 1.010
Jack Hale (Université du Luxembourg), TBA
Abstract: TBA