
Location: Faculté des Sciences et Technologies, Université de Lorraine, Boulevard des Aiguillettes, Vandoeuvre-lès-Nancy.
Date: Wednesday, June 10, 2026.
This one-day workshop will focus on the theoretical study of neural networks and tensor decompositions using geometric tools. The main topic is the geometry of the corresponding algebraic varieties: neurovarieties (in the case of neural networks) and secant varieties (for tensor decompositions). In machine learning theory, understanding the geometry of neurovarieties has proven to be key to revealing many of their fundamental properties, such as their identifiability, their expressivity, and the behavior of optimization algorithms (see, for example, neuroalgebraicgeometry.ai). The workshop will present recent developments and discuss connections between neural networks and tensor decompositions.
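For orientation, here is the standard definition behind the second of these notions (a textbook fact, not tied to any particular talk): for a projective variety $X \subseteq \mathbb{P}^N$, the $r$-th secant variety is the closure of the union of the linear spans of $r$ points of $X$,

$$\sigma_r(X) \;=\; \overline{\bigcup_{p_1,\dots,p_r \in X} \langle p_1,\dots,p_r\rangle} \;\subseteq\; \mathbb{P}^N.$$

When $X$ is the Segre variety of rank-one tensors, the points of $\sigma_r(X)$ are exactly the (limits of) tensors $T = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i$ of rank at most $r$, which is how tensor decompositions enter the picture; neurovarieties play the analogous role for network architectures, arising as (closures of) images of their parametrization maps.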
This is a follow-up to the workshop on the geometry of tensors organized in 2025.
Registration is free but mandatory (before May 21): (registration link). You are welcome to propose a short talk or a poster presentation.
| Time | Session |
|---|---|
| 09:15-09:30 | Opening remarks |
| 09:30-10:45 | Kathlén Kohn: Algebraic Neural Network Theory (abstract) |
| 10:45-11:15 | Coffee break |
| 11:15-12:30 | Alex Massarenti: Bronowski’s Conjecture, Identifiability, and Neurovarieties (abstract) |
| 12:30-14:00 | Lunch break |
| 14:00-15:00 | Maksym Zubkov: TBD |
| 15:00-17:00 | Contributed talks/poster session |
Algebraic Neural Network Theory
Abstract: The space of functions parametrized by a fixed neural network architecture is known as its ‘neuromanifold’, a term coined by Amari. Training the network amounts to solving an optimization problem over the neuromanifold, so a complete understanding of its intricate geometry would shed light on the mysteries of deep learning. This talk explores the approach of approximating neural networks by algebraic ones, which have semialgebraic neuromanifolds. Such an approximation is possible for any continuous network on a compact data domain. By the universal approximation theorem, algebraic neural networks are essentially the only ones whose neuromanifolds span finite-dimensional ambient spaces. In this setting, we can interpret training the network as finding a ‘closest’ point on the neuromanifold to some data point in the ambient space. This perspective enables us to better understand the loss landscape, i.e., the graph of the loss function over the neuromanifold. In particular, the singularities (and boundary points) of the neuromanifold can cause a tradeoff between efficient optimization and good generalization: on the one hand, singularities can yield numerical instability and slow down the learning process (as already observed by Amari); on the other hand, we will observe how the same singularities cause an implicit bias toward stable and sparse solutions. Computing the singularities is often a technical endeavor and requires us to determine both the hidden parameter symmetries of the network and the critical points of the network’s parametrization map. This talk gives an overview of how machine-learning concepts can be formulated in algebro-geometric terms and compares three popular architectures: multilayer perceptrons, convolutional networks, and self-attention networks.
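To make the notions of neuromanifold, parameter symmetries, and singularities concrete, here is a toy example in the spirit of the abstract (our illustration, not taken from the talk). Consider a one-hidden-layer polynomial network with three inputs, two hidden neurons, and square activation:

$$f_\theta(x) \;=\; v_1\,(w_1^\top x)^2 + v_2\,(w_2^\top x)^2, \qquad \theta = (w_1, w_2, v_1, v_2) \in \mathbb{R}^3 \times \mathbb{R}^3 \times \mathbb{R} \times \mathbb{R}.$$

Writing $f_\theta(x) = x^\top A_\theta\, x$ with $A_\theta = v_1 w_1 w_1^\top + v_2 w_2 w_2^\top$, the neuromanifold is the set of symmetric $3 \times 3$ matrices of rank at most $2$, i.e. the cubic hypersurface $\det A = 0$. Its singular locus is exactly the locus of matrices of rank $\le 1$, that is, the functions computed by the sparser one-neuron subnetwork; this illustrates how singularities are tied to sparse solutions. The hidden parameter symmetries here are the rescalings $(w_i, v_i) \mapsto (\lambda w_i, \lambda^{-2} v_i)$ and the permutation of the two neurons.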
Bronowski’s Conjecture, Identifiability, and Neurovarieties
Abstract: I will discuss recent results, obtained in collaboration with Massimiliano Mella, on polynomial neural networks and their associated neurovarieties, focusing on expected dimension, non-defectiveness, and global identifiability. I will then relate these ideas to Bronowski-type criteria for identifiability, including an amended form of Bronowski’s conjecture that reduces identifiability questions to secant defectiveness for a broad class of varieties.
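For context, recall the standard dimension count that the last reduction refers to (a textbook definition, not specific to the talk): for an irreducible nondegenerate variety $X \subseteq \mathbb{P}^N$ of dimension $n$, the expected dimension of the $r$-th secant variety is

$$\operatorname{expdim} \sigma_r(X) \;=\; \min\{\, r\,n + r - 1,\; N \,\},$$

and $X$ is called $r$-defective when $\dim \sigma_r(X)$ is strictly smaller. Defectiveness obstructs identifiability: if the dimension drops, the generic fibers of the map sending $r$ points of $X$ to a point of their span are positive-dimensional, so a general point of $\sigma_r(X)$ has infinitely many decompositions. Criteria that relate identifiability to secant defectiveness, such as the amended form of Bronowski’s conjecture mentioned in the abstract, exploit this link in the converse direction.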
TBD
Contact: firstname.lastname @ univ-lorraine.fr
© 2026 Geometry of Neural Networks and Tensors in Nancy