Estimating and computing entropies of probability distributions are key computational tasks throughout data science. In many situations, the underlying distributions are known only through the expectation of some feature vectors, which has led to a series of works within kernel methods. In this talk, I will explore the particular situation where the feature vector is a rank-one positive definite matrix, and show how the associated expectations (a covariance matrix) can be used with information divergences from quantum information theory to draw direct links with the classical notion of Shannon entropy.
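To make the setting concrete, here is a minimal numerical sketch of the kind of object involved, under assumptions chosen purely for illustration (a random-Fourier-type feature map and unit-norm features, neither taken from the talk): the empirical average of the rank-one matrices phi(x) phi(x)^T is a covariance matrix with unit trace, so its von Neumann entropy is well defined.

```python
import numpy as np

# Illustrative sketch (not the construction from the talk): samples are mapped
# to unit-norm feature vectors phi(x), so each phi(x) phi(x)^T is a rank-one
# positive semidefinite matrix and their average Sigma has unit trace.
rng = np.random.default_rng(0)
W = rng.normal(size=(25, 2))          # fixed random frequencies (assumed feature map)

def feature_map(x):
    phi = np.concatenate([np.cos(W @ x), np.sin(W @ x)])
    return phi / np.linalg.norm(phi)  # normalize so that trace(phi phi^T) = 1

X = rng.normal(size=(1000, 2))                 # samples from the "unknown" distribution
Phi = np.stack([feature_map(x) for x in X])    # (n, d) matrix of feature vectors
Sigma = Phi.T @ Phi / len(X)                   # empirical covariance matrix, trace = 1

# von Neumann entropy -tr(Sigma log Sigma), a quantum-information quantity
# computed directly from the covariance matrix.
lam = np.linalg.eigvalsh(Sigma)
lam = lam[lam > 1e-12]
print(-np.sum(lam * np.log(lam)))
```

Normalizing the features makes Sigma a density matrix, which is what allows quantum information divergences to be applied to it; the precise links with Shannon entropy developed in the talk go beyond this toy computation.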
Is hydrodynamics capable of performing computations? (Moore, 1991). Can a mechanical system (including a fluid flow) simulate a universal Turing machine? (Tao, 2016). Etnyre and Ghrist unveiled a mirror between contact geometry and fluid dynamics, reflecting Reeb vector fields as Beltrami vector fields. With the aid of this mirror, we can answer the questions raised by Moore and Tao in the affirmative. This recent result combines techniques due to Alan Turing with modern geometry (contact geometry) to construct a "fluid computer" in dimension 3. The construction shows, in particular, the existence of undecidable fluid paths. I will also explain applications of this mirror to the detection of escape trajectories in celestial mechanics (for which I will need to extend the mirror to a singular setup). This mirror allows us to construct a tunnel connecting problems in celestial mechanics and fluid dynamics.
In this talk, we will see how statistical methods, from the simplest to the most advanced, can be used to address various problems in medical image processing and reconstruction for different imaging modalities. Image reconstruction produces the images in question, while image processing (applied to the already reconstructed images) aims to extract some information of interest. We will review several statistical methods (mainly Bayesian) for addressing problems of this type.
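As a deliberately elementary illustration of the Bayesian viewpoint (a toy example, not a method from the talk), a maximum a posteriori estimate under a Gaussian noise model and a Gaussian smoothness prior reduces to a regularized least-squares problem:

```python
import numpy as np

# Toy MAP denoising of a 1D signal: minimize ||y - x||^2 / sigma^2 + lam * ||D x||^2,
# where D is a first-difference operator. This only illustrates the Bayesian
# formulation (Gaussian likelihood + Gaussian smoothness prior), not any
# specific modality or method from the talk.
rng = np.random.default_rng(0)
n = 200
truth = np.sin(np.linspace(0, 3 * np.pi, n))
sigma = 0.3
y = truth + sigma * rng.normal(size=n)        # noisy observation

D = (np.eye(n, k=1) - np.eye(n))[:-1]         # first-difference operator
lam = 20.0

# Posterior mode: solve (I / sigma^2 + lam * D^T D) x = y / sigma^2
A = np.eye(n) / sigma**2 + lam * D.T @ D
x_map = np.linalg.solve(A, y / sigma**2)
print(np.mean((x_map - truth) ** 2), np.mean((y - truth) ** 2))
```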
The notion of transverse Poisson structure was introduced by Arthur Weinstein, who stated in his famous splitting theorem that any Poisson manifold $M$ is, in the neighborhood of each point $m$, the product of a symplectic manifold, namely the symplectic leaf $S$ at $m$, and a submanifold $N$ which can be endowed with the structure of a Poisson manifold of rank 0 at $m$. $N$ is called a transverse slice at $m$ of $S$. When $M$ is the dual of a complex Lie algebra $\mathfrak{g}$ equipped with its standard Lie-Poisson structure, we know that the symplectic leaf through $x$ is the coadjoint orbit $G \cdot x$ of the adjoint Lie group $G$ of $\mathfrak{g}$. Moreover, there is a natural way to describe the transverse slice to the coadjoint orbit, and using a canonical system of linear coordinates $(q_1, \dots, q_k)$, it follows that the coefficients of the transverse Poisson structure are rational in $(q_1, \dots, q_k)$.
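For reference, Weinstein's splitting theorem can be written in the following local normal form (the coordinate names below are generic choices, not those used in the talk):

```latex
% Weinstein splitting: near m, the Poisson manifold (M, \pi) is locally a
% product S x N; in suitable coordinates (u_i, v_i) on the leaf S and
% (y_a) on the slice N, the Poisson bivector splits as
\pi \;=\; \sum_{i=1}^{r} \frac{\partial}{\partial u_i} \wedge \frac{\partial}{\partial v_i}
\;+\; \tfrac{1}{2} \sum_{a,b} \varphi_{ab}(y)\,
      \frac{\partial}{\partial y_a} \wedge \frac{\partial}{\partial y_b},
\qquad \varphi_{ab}(m) = 0,
% where the second term is the transverse Poisson structure on N, of rank 0 at m.
```

In the Lie-Poisson setting of the abstract, the slice coordinates correspond to the linear coordinates $(q_1, \dots, q_k)$ on $N$, and the coefficients $\varphi_{ab}$ are rational functions of them.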
Gibbs manifolds are images of affine spaces of symmetric matrices under the exponential map. They arise in applications such as optimization, statistics, and quantum physics, where they extend the ubiquitous role of toric geometry. The Gibbs variety is the zero locus of all polynomials that vanish on the Gibbs manifold. This lecture provides an introduction to these objects from the perspective of Algebraic Statistics.
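As a small computational illustration (the matrices below are arbitrary choices, not examples from the lecture), one can sample points of a Gibbs manifold by exponentiating elements of an affine space of symmetric matrices:

```python
import numpy as np
from scipy.linalg import expm

# Sketch: points of the Gibbs manifold are exp(A0 + t1*A1 + t2*A2) for an
# affine space of symmetric matrices based at A0 and spanned by A1, A2.
A0 = np.diag([1.0, 2.0, 3.0])
A1 = np.array([[0., 1., 0.],
               [1., 0., 0.],
               [0., 0., 0.]])
A2 = np.array([[0., 0., 0.],
               [0., 0., 1.],
               [0., 1., 0.]])

def gibbs_point(t1, t2):
    # matrix exponential of a symmetric matrix: a positive definite matrix
    return expm(A0 + t1 * A1 + t2 * A2)

# Sample the two-dimensional Gibbs manifold inside the space of 3x3 symmetric
# matrices; the Gibbs variety is cut out by all polynomials vanishing on it.
points = [gibbs_point(t1, t2)
          for t1 in np.linspace(-1, 1, 5)
          for t2 in np.linspace(-1, 1, 5)]
print(points[0])
```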
The last decade has seen the emergence of learning techniques that use the computational power of dynamical systems for information processing. Some of these paradigms are based on architectures that are partially randomly generated and require a relatively cheap training effort, making them ideal for many applications. The need for a mathematical understanding of the working principles underlying this approach, collectively known as Reservoir Computing, has led to the construction of new techniques that combine well-known results in systems theory and dynamics with others from approximation and statistical learning theory. This combination has recently elevated Reservoir Computing to the realm of provable machine learning paradigms and, as we will see in this talk, it also reveals various connections with kernel maps, structure-preserving algorithms, and physics-inspired learning.
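A standard minimal instance of this paradigm is an echo state network, sketched below under generic assumptions (random fixed reservoir, ridge-regression readout); it is meant only to illustrate the "cheap training" point, not the specific architectures discussed in the talk.

```python
import numpy as np

# Minimal echo state network: the recurrent reservoir weights are random and
# fixed; only the linear readout is trained (here by ridge regression).
rng = np.random.default_rng(0)
n_res, n_in = 200, 1

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(u):
    # u: (T, n_in) input sequence -> (T, n_res) reservoir states
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x)
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 40, 2000)
u = np.sin(t).reshape(-1, 1)
X = run_reservoir(u[:-1])
y = u[1:]

# Cheap training: solve a ridge-regularized least-squares problem for the readout.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print(np.mean((X @ W_out - y) ** 2))   # training error of the linear readout
```

Only W_out is trained; W and W_in stay fixed after random generation, which is the sense in which the training effort is comparatively cheap.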