Seminars
Our group hosts seminars on a regular monthly basis, either in person or remotely. To receive the latest announcements, you can subscribe to the seminar mailing list cran-simul-seminars@univ-lorraine.fr as follows:
- to subscribe, send a blank email to sympa@univ-lorraine.fr with the subject SUBSCRIBE cran-simul-seminars. An admin will review and validate your request.
- to unsubscribe, send a blank email to sympa@univ-lorraine.fr with the subject UNSUBSCRIBE cran-simul-seminars. An admin will review and validate your request.
Calendar
The calendar below includes internal team meetings (GT SiMul) and seminars from invited speakers.
Upcoming seminars
There are no upcoming seminars at the moment. Please check back later.
Past seminars
Pardis Semnani (University of British Columbia, Vancouver, Canada)
July, 22nd 2024, 14h00-15h00
Homaloidal polynomials and Gaussian models of maximum likelihood degree one
We study the Gaussian statistical models whose log-likelihood function has a unique complex critical point, i.e., has maximum likelihood degree one. We exploit the connection developed by Améndola et al. between the models having maximum likelihood degree one and homaloidal polynomials. We study the spanning tree generating function of a graph and show this polynomial is homaloidal when the graph is chordal. When the graph is a cycle on n vertices, n≥4, we prove the polynomial is not homaloidal, and show that the maximum likelihood degree of the resulting model is the nth Eulerian number. These results support our conjecture that the spanning tree generating function is a homaloidal polynomial if and only if the graph is chordal. We also provide an algebraic formulation for the defining equations of these models. Using existing results, we provide a computational study on constructing new families of homaloidal polynomials. In the end, we analyze the symmetric determinantal representation of such polynomials and provide an upper bound on the size of the matrices involved.
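For readers unfamiliar with the central object, the spanning tree generating function of a graph G = (V, E), with one variable x_e per edge, is
P_G(x) = \sum_{T \text{ spanning tree of } G} \; \prod_{e \in T} x_e .
For instance, for the 4-cycle C_4 (the smallest non-chordal case above) with edge variables x_1, x_2, x_3, x_4, every spanning tree drops exactly one edge, so P_{C_4}(x) = x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4. This small example only illustrates the definition and is not taken from the talk.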
Helena Calatrava (Northeastern University, Boston, USA)
July, 16th 2024, 11h00-12h00
GNSS Signal Processing for Precise and Robust Positioning
In this talk, we will explore two methodologies designed to enhance the performance of Global Navigation Satellite Systems (GNSS): collaborative positioning techniques to improve positioning accuracy and robust signal processing to enhance resilience against jamming attacks. First, we will introduce the Massive User-Centric Single Difference (MUCSD) algorithm, which enhances GNSS accuracy through user collaboration. MUCSD leverages a network of receivers exchanging observables and noisy estimates of position and clock bias. Implemented as an iterative weighted least squares (WLS) estimator, MUCSD achieves a performance comparable to Differential GNSS (DGNSS) without the need for costly reference stations. Simulation results demonstrate that MUCSD outperforms DGNSS as the number of collaborative receivers increases, showcasing its scalability. Next, we will discuss the Robust Interference Mitigation (RIM) framework for snapshot architectures, addressing interferences such as continuous wave and chirp jamming signals. While studies on RIM typically assume the number of quantization bits allows for full signal representation, our study examines the impact of low quantization bits on baseline snapshot receiver performance in the presence of interference. Additionally, we will analyze the effect of quantization on the median absolute deviation (MAD) robust measure of statistical dispersion. By combining collaborative positioning techniques and robust interference mitigation, this talk will showcase advancements that improve GNSS precision and resilience.
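To make the estimation setting concrete, here is a minimal sketch of one linearized weighted least-squares update of receiver position and clock bias from pseudoranges; it is the generic single-receiver building block, not the MUCSD algorithm itself, and all names and values are illustrative.

import numpy as np

def wls_step(sat_pos, pseudoranges, x0, b0, weights):
    # sat_pos: (n, 3) satellite positions, pseudoranges: (n,) in meters,
    # x0: (3,) current position estimate, b0: clock bias in meters, weights: (n,)
    d = np.linalg.norm(sat_pos - x0, axis=1)          # predicted geometric ranges
    r = pseudoranges - (d + b0)                       # prefit residuals
    H = np.hstack([-(sat_pos - x0) / d[:, None],      # unit line-of-sight vectors
                   np.ones((len(d), 1))])             # clock-bias column
    W = np.diag(weights)
    dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)    # weighted normal equations
    return x0 + dx[:3], b0 + dx[3]

Starting from a rough initial guess, a few such iterations are repeated until the update becomes negligible; collaborative schemes such as MUCSD extend this kind of estimator across a network of receivers exchanging observables.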
Ivan Yakushev (ENSAD, Nancy)
June, 19th 2024, 10h30-11h30
Application of machine learning in visual art
This presentation will explore the current state of using local diffusion models in video and image generation, as well as the interfaces and workflows commonly utilized in this area. In addition to highlighting the current capabilities of these models, we will also discuss their limitations and potential opportunities for future research and multidisciplinary collaboration.
Konstantin Usevich (CRAN)
June, 10th 2024, 14h-15h
Algebraic Algorithms for the ParaTuck-2 Decomposition
The ParaTuck-2 decomposition (PT2D) of third-order tensors is a two-level extension of the well-known CP (canonical polyadic) decomposition (CPD). It is relevant in several applications, such as chemometrics, telecommunications, and machine learning. As shown in (Harshman, Lundy, 1996), the PT2D enjoys strong uniqueness properties (up to scaling/permutation ambiguities, similarly to the CPD). However, there are very few results on theory and algorithms for the PT2D. In particular, common strategies, such as alternating least squares, suffer from convergence and initialization issues. We propose an algebraic algorithm for the PT2D in the case when the ParaTuck-2 ranks are smaller than the frontal dimensions of the tensor. Our approach relies only on linear algebra operations and is based on finding the kernel of a structured matrix constructed from the tensor. It refines the previously known identifiability conditions. Yet another algorithm is proposed for the symmetric case, which appears in the implicit approach to the PARAFAC-2 model.
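In slice-wise form (one common way to write both models; exact conventions vary between papers), the frontal slices X_k of the tensor satisfy
CPD:  X_k = A \,\mathrm{diag}(c_{k,:})\, B^\top,
PT2D: X_k = A \,\mathrm{diag}(c^{A}_{k,:})\, R \,\mathrm{diag}(c^{B}_{k,:})\, B^\top, \quad k = 1, \dots, K,
where the interaction matrix R and the two scaling matrices C^A and C^B give the PT2D its two-level structure. The algebraic algorithm mentioned above recovers these factors from the kernel of a structured matrix built from the slices X_k.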
Vicente Zarzoso (Université Côte d'Azur, Nice)
April, 15th 2024, 15h-16h
Tensor decomposition of ECG records for persistent atrial fibrillation analysis
Considered the last great frontier of cardiac electrophysiology, atrial fibrillation (AF) is the most common sustained arrhythmia encountered in clinical practice, responsible for high hospitalization rates and a significant proportion of brain strokes in the Western world. Analyzing AF electrophysiological complexity noninvasively requires the extraction of the atrial activity (AA) signal from the electrocardiogram (ECG). To perform this task, most approaches, including classical average beat subtraction, need sufficiently long ECG records, thus limiting real-time analysis. Matrix factorizations can also be used for AA signal estimation by exploiting the spatial diversity of the multi-lead ECG, but they require constraints to guarantee uniqueness that may lack physiological grounds and hinder the interpretation of the results.
This talk will review recent results obtained at the I3S Laboratory, UMR 7271, Université Côte d'Azur, CNRS, on tensor decompositions for noninvasive AA signal extraction in AF ECGs, which guarantee uniqueness under milder constraints on their factors. Specifically, the block term decomposition (BTD) has been shown to be particularly suitable to address this biomedical problem, as atrial and ventricular cardiac activity sources can be modeled by matrices with special structure. The structure of these matrices ensures model uniqueness while their rank is linked to signal complexity. In this framework, we have put forward the Hankel and Löwner BTD as AA extraction tools in AF ECG episodes, with validation in a population of persistent AF patients and several challenging types of ECG segments, including short beat-to-beat intervals and low-amplitude fibrillatory waves. Accurate AA extraction can be achieved from ECG segments as short as a single heartbeat. We have also developed a robust computational algorithm - the so-called alternating group lasso BTD (BTD-AGL) - to simultaneously recover the model structure (number of block terms and multilinear rank of each term) and the model factors. In addition, tensor modeling allows us to derive a novel index to quantify AF complexity noninvasively, which is useful for characterizing stepwise catheter ablation, a first-line therapeutic option for the treatment of persistent forms of the arrhythmia. The index correlates with the expected decrease in AF complexity over ablation steps and is predictive of AF recurrence, which presents clear clinical interest.
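As a small illustration of the Hankel lifting that underlies the Hankel-BTD approach, the sketch below maps a multi-lead ECG segment to a third-order tensor; it is a generic construction with made-up dimensions, not the authors' code.

import numpy as np
from scipy.linalg import hankel

def hankelize_leads(ecg, L):
    # ecg: array of shape (n_leads, n_samples); returns a tensor of shape
    # (L, n_samples - L + 1, n_leads) with one Hankel matrix per lead
    slices = [hankel(lead[:L], lead[L - 1:]) for lead in ecg]
    return np.stack(slices, axis=-1)

Generically, a lead that is a sum of L (damped) complex exponentials yields a Hankel slice of rank L, which is the structure exploited by the block terms of the BTD.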
Paul Catala (Helmholtz Munich & TUM, Germany)
March, 25th 2024, 14h-15h
An Approximate Joint Diagonalization Algorithm for Off-the-Grid Sparse Recovery
Many problems in imaging and data science require the reconstruction, from partial observations, of highly concentrated signals, such as pointwise sources or contour lines. This work introduces a novel algorithm for recovering measures supported on such structured domains, given a finite number of their moments. Our approach is based on the traditional singular value decomposition methodology of subspace methods, but lifts their restriction to the framework of Dirac masses, and is able to recover geometrically faithful discrete approximations of measures with density. The crucial step consists in the approximate joint diagonalization of a few non-commuting matrices, which we perform using a quasi-Newton algorithm. Experiments show that our method performs well, not only in the setting of well separated Dirac masses, as predicted by the standard theory of the truncated moment problem, but also in the case of continuous measures, which is not covered by theoretical guarantees and where usual methods empirically fail. We illustrate its applicability in optimal transport problems, where the coupling measure is often localized on the graph of some function.
Marc Offroy (LIEC, Université de Lorraine)
March, 18th 2024, 10h-11h
Extraction of spectral signatures in Raman imaging using chemometric tools for the molecular characterization of a complex archaeological sample: a mosaic fragment dating from the oppidum period (Roman period, from the second half of the 2nd century BC to the end of the 1st century AD)
Analytical chemistry applied to archaeological artifacts is an essential part of modern archaeological research and, year after year, improvements in instrumentation have made it possible to generate data at high spatial and temporal frequency. In particular, Raman spectral imaging can be successfully applied to archaeological research, owing to its ease of implementation, in order to study past human societies through the analysis of remains recovered from excavations. This spectral technique provides spatial and spectral information simultaneously while preserving the integrity of the sample. However, because of the inherent complexity of archaeological samples (age, fragility, scarce or entirely missing information about their composition), the chemical interpretation can prove difficult. Specific problems of spectral selectivity, linked to unexpected chemical compounds, can appear as a result of the conservation conditions. Moreover, the detection of minor compounds becomes difficult because the major compounds impose their contributions on the acquired spectra. It is therefore important to develop new chemometric approaches to overcome these drawbacks and thus uncover all the real chemical information contained in the acquired spectral dataset. Building on the constant progress in the development of powerful mathematical and statistical tools, a relevant chemometric approach has been introduced in this context. This approach aims to extract distinct spectral sources from a Raman imaging dataset acquired on an archaeological sample: a mosaic fragment. The objective is to extract selective spectral information through pixel-clustering analysis in order to improve the initial optimization step within the MCR-ALS (Multivariate Curve Resolution and Alternating Least-Squares) algorithm, a well-known signal unmixing technique. The underlying principle of the MCR-ALS algorithm is that the acquired spectra are considered as linear combinations of the "pure" spectra of all the individual chemical compounds present in the studied system. It is sometimes difficult to obtain the desired results with the algorithm, in particular when the initial estimates of the spectral or concentration profiles are inaccurate because of complex signals, noise in the data, or a lack of spectral selectivity, which leads to rank deficiency (i.e., a poor estimate of the total number of "pure" signals). For this reason, an approach based on pixel clustering, combined with multiple orthogonal projection approaches (OPA) applied to the spectra, has been developed to improve the rank estimation of the initial matrix, and hence the initialization step of the MCR-ALS approach before optimization.
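As a reminder of the bilinear model at the heart of MCR-ALS, here is a deliberately crude alternating least-squares sketch; nonnegativity is enforced by simple clipping rather than a proper NNLS solver, and the clustering-based initialization proposed in the talk is not included.

import numpy as np

def mcr_als(D, S0, n_iter=100):
    # D: (n_pixels, n_channels) measured spectra; S0: (n_channels, n_comp) initial pure spectra
    # Model: D ~= C @ S.T with nonnegative concentrations C and pure spectra S
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)  # update concentrations
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)    # update pure spectra
    return C, S

The quality of the initial estimate S0 is exactly what the pixel-clustering and OPA-based initialization described above is meant to improve.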
Linus Bleistein (Inria Paris)
March, 11th 2024, 14h-15h
Dynamical Survival Analysis with Controlled Latent States
We consider the task of learning individual-specific intensities of survival processes from static and longitudinal data. Modeling the intensities as solutions of unknown non-parametric differential equations allows us to provide a precise bias-variance decomposition of a signature-based estimator. This estimator yields excellent performance on a large array of both simulated and real datasets from finance, predictive maintenance and churn prediction.
Valentin Leplat (SkolTech, Russia)
February, 9th 2024, 14h-15h
Introduction to Deep Nonnegative Matrix Factorization and Stochastic Optimization with Heavy Tails
Part 1: Deep Nonnegative Matrix Factorization with β-Divergences
Our first topic revolves around Deep Nonnegative Matrix Factorization (deep NMF), a novel and promising facet of unsupervised learning. Deep NMF has emerged as a potent technique for extracting multi-layered features spanning various scales. However, conventional deep NMF models have primarily relied on the least squares error as their evaluation metric, which may not be the most suitable gauge for assessing the quality of approximations across diverse datasets. For data types such as audio signals and documents, β-divergences have gained recognition as a more fitting alternative. In this seminar, we present new models and algorithms that harness β-divergences to enhance deep NMF, with an emphasis on the notion of identifiability.
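For reference, the β-divergence between nonnegative scalars x and y, whose sum over all entries replaces the least-squares error in these models, is commonly defined as
d_\beta(x \mid y) = \frac{1}{\beta(\beta-1)} \left( x^\beta + (\beta - 1)\, y^\beta - \beta\, x\, y^{\beta-1} \right), \qquad \beta \in \mathbb{R} \setminus \{0, 1\},
with the limiting cases d_1(x \mid y) = x \log\frac{x}{y} - x + y (generalized Kullback-Leibler) and d_0(x \mid y) = \frac{x}{y} - \log\frac{x}{y} - 1 (Itakura-Saito); β = 2 recovers the squared Euclidean error \frac{1}{2}(x - y)^2.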
Part 2: Heavy-Tailed Stochastic Optimization for Deep Neural Networks
Our second topic concerns stochastic optimization, with a particular focus on recent discoveries concerning the nature of stochastic gradient noise in deep neural network training. Contrary to the conventional assumption of Gaussian noise, empirical evidence shows that gradient noise often exhibits heavy-tailed characteristics. We introduce an efficient mechanism for optimizers to handle this noise behavior. Additionally, we showcase an extension of our recently introduced stochastic optimizer, referred to as NAG-GS, specifically tailored for the training of Vision Transformers.
Shuyu Dong (INRIA Saclay)
January, 19th 2024, 15h-16h
Low-rank matrix/tensor decomposition methods: applications to data completion and causal structure learning
Matrix and tensor decompositions play a crucial role in addressing various real-world problems related to statistical inference, data acquisition, and data restoration. In this talk, we start with low-rank matrix/tensor models for the data completion problem [1,2]. We tackle this problem in the framework of low-rank matrix/tensor decomposition with a least-squares model. These rank-constrained problems are known not only for their low computational complexity but also for their ability to extract the most important information in the data. We discuss a class of Riemannian gradient-based algorithms that exploit the structure of these rank-constrained models. Secondly, we present a novel application of low-rank matrix methods in the context of causal structure learning. We will show how low-rank matrix decomposition, in combination with a sparse mask operator, can be used to efficiently find directed acyclic graphs (DAGs) proximal to a given graph (with cycles). Furthermore, for learning causal DAGs from observational data, we present a sparse matrix decomposition method [4] and discuss its efficiency through experiments on synthetic and real-world data.
[1] S. Dong, P.-A. Absil, and K. A. Gallivan, Riemannian gradient descent methods for graph-regularized matrix completion, Linear Algebra and its Applications 623 (2021), 193-235.
[2] S. Dong, B. Gao, Y. Guan, and F. Glineur, New Riemannian preconditioned algorithms for tensor completion via polyadic decomposition, SIAM Journal on Matrix Analysis and Applications 43 (2) (2022), 840-866.
[3] S. Dong and M. Sebag, From graphs to DAGs: a low-complexity model and a scalable algorithm, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2022.
[4] S. Dong, K. Uemura, A. Fujii, S. Chang, Y. Koyanagi, K. Maruhashi, and M. Sebag, Learning large causal structures from inverse covariance matrix via matrix decomposition, arXiv preprint arXiv:2211.14221, 2023.
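To fix ideas on the least-squares completion model of [1,2], here is a plain Euclidean gradient descent on the low-rank factors; it is not one of the Riemannian, preconditioned algorithms presented in the talk, and the step size and names are illustrative.

import numpy as np

def complete(M, mask, rank, step=1e-3, n_iter=500):
    # M: (m, n) data matrix, mask: boolean (m, n) with True on observed entries;
    # unobserved entries of M can hold any finite placeholder (e.g. 0)
    m, n = M.shape
    rng = np.random.default_rng(0)
    U, V = rng.standard_normal((m, rank)), rng.standard_normal((n, rank))
    for _ in range(n_iter):
        R = mask * (U @ V.T - M)                      # residual on observed entries only
        U, V = U - step * (R @ V), V - step * (R.T @ U)
    return U @ V.T

The Riemannian methods of [1,2] exploit the geometry of the fixed-rank set and are designed to converge faster than such a naive scheme.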
Xavier Luciani (University of Toulon)
October, 5th 2023, 13h-14h
Canonical Polyadic Decomposition and Joint Diagonalization of Matrices by Similarity: Algorithms and Applications
In signal processing, the Canonical Polyadic Decomposition (CPD) consists in decomposing a multidimensional array (here called a tensor) into a multilinear combination of a minimal number of factors, which usually include the signals of interest. This approach is therefore commonly used for source separation, mixture identification and, more generally, for solving inverse problems. Moreover, many links have been established between the CPD and the problem of joint matrix diagonalization, which also lies at the heart of many source separation methods. In the first part of this presentation, we will show how to rewrite the CPD as a Joint Diagonalization of matrices by Similarity (JDS) in order to derive an efficient computational algorithm. We will then present several families of JDS algorithms, able in particular to handle complex-valued signals or to take nonnegativity constraints into account. In the second part, we will move to the application context of fluorescence spectroscopy to introduce another CPD algorithm, this time allowing the factors of the decomposition (and their number) to be updated as new signals are acquired. We will see that these two algorithms, although very different in principle, share a certain robustness to overestimation of the number of CPD factors. Finally, we will conclude this presentation by discussing our current work, in collaboration with CRAN, on extending the notions of CPD and JDS to tensors and matrices whose entries belong to the algebra of quaternions.
Nuha Diab (Tel Aviv University, Israel)
September, 26th 2023, 14h-14h40
Optimal super-resolution of close point sources and stability of Prony's method
We consider the problem of recovering a linear combination of Dirac masses from noisy Fourier samples, also known as the problem of super-resolution. Following the recent derivation of min-max bounds for this problem when some of the sources collide, we develop an optimal algorithm which provably achieves these bounds in such a challenging scenario. Our method is based on the well-known Prony's method for exponential fitting, and a novel analysis of its stability in the near-colliding regime, combined with the decimation technique for improving the conditioning of the problem.
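For readers less familiar with it, here is a bare-bones version of classical Prony's method for m exponentials from 2m noiseless samples; the talk's contribution concerns its stability in the noisy, near-colliding regime, not this textbook recipe.

import numpy as np
from scipy.linalg import hankel

def prony(f, m):
    # f: 2m samples of f_k = sum_j a_j * z_j**k; returns nodes z_j and amplitudes a_j
    A = hankel(f[:m], f[m - 1:2 * m - 1])            # A[k, i] = f[k + i]
    c = np.linalg.solve(A, -f[m:2 * m])              # coefficients of the Prony polynomial
    z = np.roots(np.r_[1.0, c[::-1]])                # its roots are the nodes z_j
    V = np.vander(z, N=2 * m, increasing=True).T     # V[k, j] = z_j**k
    a = np.linalg.lstsq(V, f, rcond=None)[0]         # amplitudes by least squares
    return z, a

When nodes nearly collide, the Hankel system above becomes ill-conditioned, which is precisely the regime analyzed in the talk.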
Joppe De Jonghe (KU Leuven, Belgium)
April, 6th 2023, 14h-15h
Learning non-linearities in the two layer decoupling problem with an application to neural network compression
Methods for the decoupling of multivariate functions have been developed in order to determine the parameters and internal representations of non-linear static components in block-oriented system identification. These methods solve the single-layer decoupling problem, whose solution has a natural interpretation as a neural network with a single hidden layer and flexible activation functions in the neurons. As a result, these methods have been used to compress neural (sub)networks. However, currently only compression to a single hidden layer is well understood, but more complex (sub)networks may require more flexibility in the number of hidden layers. Providing compression to more than one hidden layer corresponds to approximating a solution of a multi-layer decoupling problem. In this talk, I will briefly describe the single-layer decoupling problem and why multi-layer decoupling is relevant, more specifically for neural network compression. Next, I will present the two-layer decoupling problem as well as a solution strategy. In addition, two algorithms for approximating a solution will be discussed and described conceptually.
Khazhgali Kozhasov (TU Braunschweig, Germany)
December, 6th 2022, 14h-15h
Real aspects of the problem of rank-one approximation
Let us consider the problem of approximating a real tensor T (of a given format) by a rank-one tensor T_1 that minimizes the Frobenius norm ||T-S_1|| over all rank-one tensors S_1. Such a tensor T_1 is, in particular, a critical point of the squared distance function dist_T: S_1 -> ||T-S_1||^2 on the manifold X of rank-one tensors. The largest possible number N of critical points of dist_T among all generic T can be interpreted as a measure of complexity of the rank-one approximation problem. I will discuss a bound on N due to Friedland and Ottaviani and will explain a technique that has been successfully used to determine a sharp bound on the number of symmetric critical points of dist_T for a symmetric tensor T. If time permits, I will discuss tensors that have the worst rank-one approximation error and mention a recent result that roughly means that symmetric tensors are as far from being of rank one as general tensors.
Lorena León (IRIT, University of Toulouse)
November, 17th 2022, 14h-15h
Bayesian Multivariate Multifractal Analysis with application to Drowsiness Detection
Multifractal analysis has become a reference tool for signal and image processing. Grounded in the quantification of local regularity fluctuations, it has proven useful in an increasing range of applications, yet so far involving only univariate data (scalar valued time series or single channel images). Recently the theoretical ground for multivariate multifractal analysis has been devised, showing potential for quantifying transient higher-order dependence beyond linear correlation among collections of data. However, the accurate estimation of the parameters associated with a multivariate multifractal model remains challenging, especially for small sample size data. This work studies an original Bayesian framework for multivariate multifractal estimation, combining a novel and generic multivariate statistical model, a Whittle-based likelihood approximation and a data augmentation strategy allowing parameter separability. This careful design enables efficient estimation procedures to be constructed for two relevant choices of priors using a Gibbs sampling strategy. Monte Carlo simulations, conducted on synthetic multivariate signals and images with various sample sizes and multifractal parameter settings, demonstrate significant performance improvements over the state of the art, at only moderately larger computational cost. Moreover, we show the relevance of the proposed framework for real-world data modeling in the important application of drowsiness detection from multichannel physiological signals.
Jonathan Gillard (Cardiff University)
November, 10th 2022, 14h-15h
Low-rank methods for time series analysis.
This talk will describe some classic and recent results on low-rank methods for problems of time series analysis. The first typical and fundamental step to enable low-rank methods in this setting is to embed a time series into a Hankel matrix; low-rank approximations of this matrix which maintain the Hankel structure have meaning for classical problems such as approximation, de-noising, and forecasting. This claim will be justified in the talk and is part of a broader field known as structured low-rank approximation (SLRA). We discuss some results in SLRA before describing recent work on the nuclear norm convex relaxation of the rank minimization of Hankel matrices for forecasting, which gives rise to interesting theory and much potential for application, and can be viewed as a particular problem of matrix completion.
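As a concrete instance of the Hankel embedding idea, here is a minimal low-rank de-noising sketch in the spirit of singular spectrum analysis; it is a generic illustration, not the nuclear-norm forecasting method discussed in the talk.

import numpy as np
from scipy.linalg import hankel

def hankel_lowrank_denoise(x, L, rank):
    # Embed the series x into an L x (len(x) - L + 1) Hankel matrix, truncate its SVD,
    # then average anti-diagonals to map back to a series
    X = hankel(x[:L], x[L - 1:])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Y = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(len(x))
    for k in range(len(x)):                           # anti-diagonal averaging
        i = np.arange(max(0, k - Y.shape[1] + 1), min(L, k + 1))
        out[k] = Y[i, k - i].mean()
    return out

Alternating the SVD truncation and the Hankel projection (Cadzow iterations) is one classical way to approach structured low-rank approximation.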
Barbara Pascal (CRIStAL, Lille)
July, 12th 2022, 10h30-11h30
The Kravchuk transform: a novel covariant representation for discrete signals amenable to zero-based detection tests.
Recent works in time-frequency analysis proposed to switch the focus from the maxima of the spectrogram toward its zeros, which form a random point pattern with a very stable structure. Several signal processing tasks, such as component disentanglement and signal detection procedures, have already been renewed by using modern spatial statistics on the pattern of zeros. However, they require a cautious choice of both the discretization strategy and the observation window in the time-frequency plane. To overcome these limitations, we propose a generalized time-frequency representation: the Kravchuk transform, designed specifically for the analysis of discrete signals, whose phase space is the unit sphere, particularly amenable to spatial statistics. We show that it has all desired properties for signal processing, among which covariance, invertibility and symmetry, and that the point process of the zeros of the Kravchuk transform of complex white Gaussian noise coincides with the zeros of the spherical Gaussian Analytic Function. Elaborating on this theorem, we finally develop a Monte Carlo envelope test procedure for signal detection based on the spatial statistics of the zeros of the Kravchuk spectrogram. After reviewing the unorthodox path focusing on the zeros of the standard spectrogram and the associated theoretical results on the distribution of zeros in the case of white noise, I will introduce the Kravchuk transform and study the random point process of its zeros from a spatial statistics perspective. Then I will present the designed Monte Carlo envelope test and illustrate its numerical performance in adversarial settings, with both a low signal-to-noise ratio and a small number of samples, and compare it to state-of-the-art zero-based detection procedures.
Tulay Adali (University of Maryland, Baltimore County)
May, 30th 2022, 13h30-14h30
Independent Component and Vector Analyses
In many fields today, such as neuroscience, remote sensing, computational social science, and the physical sciences, multiple sets of data are readily available. Matrix and tensor factorizations enable joint analysis, i.e., fusion, of these multiple datasets such that they can fully interact and inform each other while also minimizing the assumptions placed on their inherent relationships. A key advantage of these methods is the direct interpretability of their results. This talk presents an overview of models based on independent component analysis (ICA) and its generalization to multiple datasets, independent vector analysis (IVA), with examples in the fusion of neuroimaging data. The relationship of IVA to other methods such as multiset canonical correlation analysis (MCCA) is discussed, and a number of important directions of research are addressed, along with the challenges.
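As a toy illustration of the ICA building block, here is a two-source example using scikit-learn; it is purely illustrative and unrelated to the neuroimaging pipelines of the talk.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]    # two independent sources
X = S @ rng.standard_normal((2, 2)).T               # unknown linear mixture
S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
# S_hat recovers the sources up to permutation and scaling, the usual ICA ambiguities

IVA generalizes this idea to several datasets at once by exploiting the dependence of corresponding sources across datasets while keeping sources independent within each dataset.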
Raphael Mignot (IECL, Nancy)
May, 5th 2022, 10h30-11h30
Barycenters of time series: a new approach based on the signature method
The signature method has been widely used for the analysis of multivariate time series. This approach has proven effective in many statistical learning applications. Defining a notion of barycenter in the space of signatures is a promising first step towards developing new extensions of principal component analysis (PCA) or of the k-means algorithm to time series.
Rima Khouja (INRIA, Sophia Antipolis)
April, 28th 2022, 10h-11h
Riemannian Newton optimization methods for the symmetric tensor approximation problem
Tensors are a higher-order generalization of matrices. They appear in a myriad of applications. The tensor rank decomposition writes a tensor as a minimal sum of simple rank-1 tensors. In practice, the presence of noise in the tensor's entries means that computing an approximate low-rank decomposition is more relevant than computing the exact tensor rank decomposition. This problem is known as the low-rank tensor approximation problem. In this talk, we discuss the low-rank tensor approximation problem for symmetric tensors, i.e., tensors whose entries are unchanged under any permutation of their indices. The symmetric tensors are considered with complex coefficients. We present a Riemannian optimization approach, proposing Riemannian Newton and Riemannian Gauss-Newton algorithms to solve this problem. We show how the low-rank symmetric tensor approximation problem can be used to tackle the problem of recovering spherical Gaussian mixture models from datasets, where the tensor is built from empirical moments of the data distribution.
Clément Elvira (IETR, Rennes)
December, 9th 2021, 10h-11h
Safe screening: introduction and perspectives
Simon Barthelmé (Gipsa-Lab, France)
November, 19th 2021, 15h-16h
Smoothing (Large) Graph Signals using Random Forests
A natural way of denoising graph signals is to penalise local variation using the graph Laplacian. Because this has worst-case cost O(n^3) in the number of nodes, exact methods cannot be used for very large graphs. I'll show how a simple stochastic process can be used to obtain fast unbiased estimators for the smoothed signal. I'll introduce some variance reduction techniques, including a gradient-descent technique that works more generally whenever an unbiased estimator of a least-squares problem is available. Joint work with Yigit Pilavci, Nicolas Tremblay, P-O Amblard
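For reference, the exact baseline that the forest-based estimators avoid is the direct linear solve below, shown on a small chain graph; q is the regularization weight and all values are illustrative.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n, q = 200, 5.0
main = np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]                 # node degrees of a chain graph
L = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.3 * rng.standard_normal(n)   # noisy graph signal
x_smooth = spsolve(sp.identity(n, format="csc") + q * L, y)  # argmin ||y - x||^2 + q x' L x

The worst-case O(n^3) cost of such solves on general graphs is what the random-forest estimators replace with cheap unbiased stochastic estimates.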
Mariya Ishteva (KU Leuven, Belgium)
October, 21st 2021, 14h30-15h
Tensor methods with applications in system identification
TBD
Jean-Yves Tourneret (ENSEEIHT, Toulouse)
October, 21st 2021, 15h-15h40
Hypersphere Fitting: Model, Algorithms and Future Work
We will present a recent EM algorithm for hypersphere fitting based on a von Mises-Fisher prior. The algorithm achieves competitive performance compared to the state of the art. In addition, it can be easily robustified to mitigate the presence of potential outliers. After presenting some results obtained with this algorithm, we will discuss some open issues related to the application to LiDAR point clouds. These open issues include the consideration of mixtures of hyperspheres, the segmentation and denoising of LiDAR point clouds, and the fusion of point clouds with RGB images.
Eric Chaumette (ISAE-Supaéro, Toulouse)
October, 21st 2021, 15h40-16h10
Robust Linearly Constrained Filtering and Smoothing: Results and Applications
It is well known that Wiener and Kalman filter (KF)-like techniques are sensitive to misspecified covariances, uncertainties in the system matrices and parameters, filter initialization, or unexpected system behaviors induced by time-varying environments, harsh propagation conditions, malicious interferences or unmodeled inputs. In this talk, we introduce a possible solution to robustify these estimation techniques through linear constraints (LCs): i) we detail the linearly constrained KF (LCKF), where a set of non-stationary LCs can be set at every time step, ii) we show how to use such LCs to mitigate modeling errors in general mismatched linear discrete state-space models, and iii) we point the reader to some recent LCKF extensions (i.e., information filter, invariant filter, linear smoother and LC-extended/cubature-KF). Some applications of interest are provided to support the discussion: robust array processing, GNSS position and attitude estimation, invariant navigation and visual SLAM.
Radu Ranta (CRAN, Université de Lorraine)
June, 24th 2021, 10h-11h
Low-Rank Inverse Problems in Brain Signal Processing
The presentation will start by introducing the basic principles of biophysics that allow modeling electrophysiological brain measurements (EEG / SEEG / micro-electrodes), and more precisely the relations between the neural current sources and the electrodes. Once the signal model is defined, I will briefly present some of the classical methods for solving the inverse problem of brain source estimation (localization and activity), and I will focus next on our work on sparse approximations and low-rank (exact and approximate) source estimates.
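The standard linear forward model behind all of these methods can be written as
y = G x + n,
where y collects the electrode measurements, G is the lead-field (gain) matrix given by the biophysical model, x is the source activity and n is the noise; a classical Tikhonov-regularized (minimum-norm type) inverse is
\hat{x}_\lambda = G^\top \left( G G^\top + \lambda I \right)^{-1} y,
given here only as a point of comparison for the sparse and low-rank source estimates discussed in the talk.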
Gaëtan Frusque (ETH Zürich)
April, 16th 2021, 10h-11h
Inference and decomposition of dynamic graphs in neuroscience
Dynamic graphs make it possible to understand the evolution of complex systems which evolve through time. In this thesis, we look at their applications to understand one of the most common neurological disorders in the world, affecting around 1% of the population: epilepsy. A complete and objective characterization of the patient-specific dynamic graph describing this pathology is crucial for optimal surgical treatment. First, we propose to modify a measure of functional connectivity, the Phase-Locking Value, in order to infer robust dynamic graphs from the neurophysiological signals recorded during an epileptic seizure. A constrained matrix decomposition method is then applied to extract the principal features from the dynamic graph describing the pathology. Finally, a clinical study is performed to compare the obtained features with the visual interpretation of a clinician specialized in neurophysiological signal interpretation.
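For reference, the standard Phase-Locking Value between two channels with instantaneous phases \varphi_1 and \varphi_2, computed over N time samples (or trials), is
\mathrm{PLV} = \left| \frac{1}{N} \sum_{n=1}^{N} e^{\, i (\varphi_1(n) - \varphi_2(n))} \right| \in [0, 1];
this is the classical definition that the work described above modifies in order to obtain more robust dynamic graphs.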
Titouan Parcollet (Université d’Avignon)
March, 24th 2021, 14h-15h
Should we use quaternion neural networks? Recent advances and limitations.
Real-world data used to train modern artificial neural networks reflect the complexity of the environment that we are evolving in. As a consequence, they are neither flat, nor decorrelated, nor one-dimensional. Instead, scientists have to deal with composed and multidimensional entities that are characterized by multiple related components, such as the color channels describing a single color pixel of an image, or the 3D coordinates of a point denoting the position of a robot. Surprisingly, recent advances in deep learning are mainly focused on developing novel architectures to extract even more relevant and robust high-level representations from the input features, while the latter are still poorly considered at a lower and more basic level, by being processed with one-dimensional real-valued neural models. Neural networks based on complex and quaternion numbers have been used sparsely for many decades. Nonetheless, due to new statements and proofs about the benefits of these models over real-valued ones on many real-world tasks, quaternion-based neural networks have been increasingly employed, and novel quaternion-based architectures have been proposed. This talk will detail quaternion neural network architectures for artificial intelligence related tasks, such as image processing or speech recognition, by first introducing the basics of quaternion numbers and then describing recent advances on quaternion neural networks with the quaternion convolutional (Interspeech 2018, ICASSP 2019) and recurrent neural networks (ICLR 2019, Interspeech 2020). This presentation will also show their benefits in terms of the performance obtained in different tasks, as well as in terms of the number of neural parameters required for learning. Finally, the talk will outline important future research directions to turn quaternion neural networks into an essential alternative to real-valued models for real-world tasks.
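The basic operation replacing the real-valued product in such layers is the Hamilton product of two quaternions q1 = a1 + b1·i + c1·j + d1·k and q2 = a2 + b2·i + c2·j + d2·k; a minimal sketch, for illustration only:

def hamilton(q1, q2):
    # Hamilton product of two quaternions given as (a, b, c, d) tuples
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

In a quaternion layer the weights and activations are quaternions, and this product ties the four components of an input (e.g. the R, G, B channels of a pixel) together, which is one reason such layers need roughly four times fewer real parameters than real-valued layers of the same width.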
Konstantin Usevich (CRAN, SiMul)
February, 5th 2021, 10h-11h
Kernel matrices in the flat limit
Kernel matrices are ubiquitous in statistics and machine learning, where they occur most often as covariance matrices of Gaussian processes, in non-parametric or semi-parametric models. In approximation theory, they appear, for example, in approximation and interpolation with radial basis functions. Most of the theoretical work on kernel methods has focused on large-n asymptotics, characterising the behaviour of kernel matrices as the amount of data increases. Fixed-sample analysis is much more difficult outside of simple cases, such as locations on a regular grid. In this talk, I will describe a fixed-sample analysis that was first studied in the context of approximation theory by Fornberg & Driscoll (2002), called the "flat limit". In flat-limit asymptotics, the goal is to characterise kernel methods as the length-scale of the kernel function tends to infinity, so that kernels appear flat over the range of the data. While the resulting kernel matrix becomes singular, fascinatingly, the interpolation and regression problems remain well-defined in the limit. In the talk, I will mainly report recent results on the spectral properties of kernel matrices (https://arxiv.org/abs/1910.14067). In the flat limit, different types of kernels behave differently, and what matters most is the smoothness of the kernel functions. The flat limit also highlights the close kinship between kernel methods and polynomial and spline regression. If time permits, I will discuss some implications for GP regression and Determinantal Point Processes. This is joint work with S. Barthelmé, N. Tremblay and P.-O. Amblard (GIPSA-lab, Grenoble).
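A small numerical illustration of the phenomenon (the Gaussian kernel matrix becoming numerically singular as the length-scale grows; illustrative only, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(20, 1))                        # 20 fixed sample locations
sq_dist = (x - x.T) ** 2
for ell in [0.1, 1.0, 10.0, 100.0]:
    K = np.exp(-sq_dist / (2 * ell ** 2))            # Gaussian kernel matrix
    print(ell, np.linalg.cond(K))                    # condition number blows up with ell

Despite this blow-up, the flat-limit results show that the associated interpolation and regression problems remain well-defined.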
Fateme Ghayem (Gipsa-Lab, Grenoble)
January, 13th 2021, 10h-11h
Optimal sensor placement for signal extraction
Many signal processing problems can be cast from a generic setting where a source signal propagates through a given environment to some sensors. Under this setting, we can be interested either in (i) estimating the source signal, or (ii) the environment, or even (iii) the resulting field of signals in some regions of the environment. In all these cases, signals are recorded by multiple sensors located at different positions. Due to price, energy, or ergonomic constraints, the number of sensors is often limited and it becomes crucial to place a few sensors at positions that contain the maximum information. This problem corresponds to optimal sensor placement and it appears in a great number of applications. The way to tackle the problem of optimal sensor placement depends on which of the three aspects mentioned above we want to address. In this talk, we focus on estimating a source signal from a set of noisy measurements collected from a limited number of sensors, and we present new criteria as well as algorithms. Specifically, our first proposed criterion maximizes the average signal to noise ratio (SNR) of the estimated signal, and we experimentally show that the performance obtained by this criterion outperforms the results obtained using classical Kriging-based methods. Since the SNR is uncertain in this context, to achieve a robust signal extraction, we propose a second placement criterion based on the maximization of the probability that the output SNR exceeds a given threshold. This criterion can be easily evaluated using a Gaussian process assumption for the signal, the noise, and the environment. Moreover, to reduce the computational complexity of the joint maximization of the criterion with respect to all sensor positions, we propose a greedy algorithm where the sensor positions are sequentially (i.e. one by one) selected. Finally, for improving the sub-optimal greedy algorithm, we present an optimization approach to locate all the sensors at once. For this purpose, we add a constraint to the problem that can control the average distances between the sensors. To solve our problem, we use an alternating optimization penalty method.
Nikola Besic (Centre Météorologie Radar de Météo-France, Toulouse)
December, 14th 2020, 14h-15h
My experiences in radar remote sensing: the Earth from the sky, the sky from the Earth, and how to benefit from both?
Radar remote sensing is a discipline that relies on signal, image and data processing, and on physics. Characterized by numerous specificities compared to other remote sensing modalities, it has proven indispensable for the observation of the Earth and of the atmosphere. Moreover, it is a field that has motivated and enabled a great deal of research on the statistical analysis of signals, images and data. Nikola Besic will share with us some of his experiences in observing the Earth from the sky, and the sky from the Earth, by means of radar. The first part of his talk addresses work on spaceborne polarimetric Synthetic Aperture Radar, including the question of statistical models and of decomposition, as well as applications in the context of cryosphere studies. The second part concerns the observation of the atmosphere with a radar that is still polarimetric, but this time ground-based, and his efforts to find a compromise between physics and data-driven machine learning, in the context of semi-supervised classification methods.