I am interested in the geometry of the space of totally non-degenerate real quadratic forms.
These are local models of ridgy Lagrangians, introduced as a tool in the arborealization program.
There is an equivalent notion of principally regular matrices, i.e. real symmetric matrices
all of whose principal minors are non-zero.
I have formulated the conjecture that every connected component of this
space is contractible. Studying the semialgebraic sets defined by assigning sign patterns to
all principal minors is equivalent to studying orientations of the uniform Lagrangian matroid.
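As a concrete illustration (a minimal sketch of my own, not part of any published code), one can test principal regularity and record the sign pattern of all principal minors of a symmetric matrix:

```python
# Check whether a real symmetric matrix is principally regular, i.e. all of
# its principal minors are non-zero, and record the resulting sign pattern.
from itertools import combinations

import numpy as np


def principal_minor_signs(A, tol=1e-12):
    """Return the sign of every principal minor of a symmetric matrix A,
    or None if some minor is (numerically) zero, i.e. A is not
    principally regular."""
    n = A.shape[0]
    signs = {}
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            minor = np.linalg.det(A[np.ix_(idx, idx)])
            if abs(minor) < tol:
                return None  # not principally regular
            signs[idx] = int(np.sign(minor))
    return signs


# Example: a random symmetric matrix is principally regular with probability 1.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2
print(principal_minor_signs(A))
```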
I am interested in exploring the connection between Deep Polynomial Neural Networks and Tensor Decompositions.
I am fascinated by the algebra of neural networks, in particular the topology of the image of the parameter space inside
the model space; this also leads to questions about the landscape of the loss function during training.
One of my main goals is to obtain a better understanding of these well-described neural networks and, possibly, to extend
the analysis to CNNs and other architectures.
For example, a shallow polynomial network with a single output has the space of homogeneous polynomials
as its model space, and training such a network is equivalent to finding a low-rank symmetric CP approximation
of a given symmetric tensor.
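The correspondence can be made explicit in a few lines (a minimal sketch of my own illustration): a shallow network $x \mapsto \sum_i a_i (w_i \cdot x)^d$ corresponds to the symmetric tensor $\sum_i a_i\, w_i^{\otimes d}$, and the network value is the contraction of that tensor with $x$ in every slot.

```python
import numpy as np


def network_to_symmetric_tensor(W, a, d):
    """W: (r, n) hidden weights, a: (r,) output weights, degree d.
    Returns the order-d symmetric tensor sum_i a_i * w_i^{\otimes d}."""
    r, n = W.shape
    T = np.zeros((n,) * d)
    for i in range(r):
        Ti = a[i]
        for _ in range(d):
            Ti = np.multiply.outer(Ti, W[i])
        T += Ti
    return T


def network_output(W, a, d, x):
    """Evaluate the shallow polynomial network: sum_i a_i (w_i . x)^d."""
    return float(a @ (W @ x) ** d)


rng = np.random.default_rng(1)
W, a, d = rng.standard_normal((2, 3)), rng.standard_normal(2), 3
x = rng.standard_normal(3)
T = network_to_symmetric_tensor(W, a, d)
# The network value equals the tensor contracted d times with x.
print(network_output(W, a, d, x), np.einsum('ijk,i,j,k->', T, x, x, x))
```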
I am also interested in applications of Directions I and II in algebraic statistics
(Determinantal Point Processes, Directed Graphical Models) and in non-convex optimization
(the landscape of a loss function).
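One such connection (a minimal sketch of my own illustration, stated as a standard fact): in an L-ensemble determinantal point process the probability of a subset $S$ is a ratio of principal minors, $P(S) = \det(L_S)/\det(L + I)$, so the log-likelihood is built from exactly the principal minors studied in Direction I.

```python
import numpy as np


def dpp_log_likelihood(L, subsets):
    """Log-likelihood of observed subsets under the L-ensemble DPP with
    symmetric positive-definite kernel L."""
    n = L.shape[0]
    log_norm = np.linalg.slogdet(L + np.eye(n))[1]
    ll = 0.0
    for S in subsets:
        S = list(S)
        # log det of the principal submatrix indexed by S, minus normalization
        ll += np.linalg.slogdet(L[np.ix_(S, S)])[1] - log_norm
    return ll


rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
L = B @ B.T + 0.1 * np.eye(4)   # positive-definite kernel
print(dpp_log_likelihood(L, [{0, 2}, {1}, {0, 1, 3}]))
```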
One of my main sources of inspiration and motivation is numerical experiments.
For Direction I, I generate a point-cloud approximation of each connected component and study its topology using
Ripser, i.e. persistent homology. I have worked with point clouds of up to a million points, which brings its own
challenges. I run my computations in Python and C++ on Lawrence Berkeley National Laboratory clusters.
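A minimal sketch of this pipeline, assuming the ripser Python package (ripser.py) is installed; the naive rejection sampler below targets the positive-definite component (all principal minors positive) and stands in for the cluster-scale point-cloud generation described above.

```python
import numpy as np
from ripser import ripser


def sample_pd_component(n_points, n, rng):
    """Rejection-sample symmetric positive-definite matrices (every
    principal minor positive) and flatten each one to a vector."""
    cloud = []
    while len(cloud) < n_points:
        B = rng.standard_normal((n, n))
        A = (B + B.T) / 2
        if np.all(np.linalg.eigvalsh(A) > 0):
            cloud.append(A[np.triu_indices(n)])
    return np.array(cloud)


rng = np.random.default_rng(2)
cloud = sample_pd_component(300, 3, rng)
# Persistence diagrams of the Vietoris-Rips filtration in degrees 0 and 1.
diagrams = ripser(cloud, maxdim=1)['dgms']
for k, dgm in enumerate(diagrams):
    print(f"H{k}: {len(dgm)} intervals")
```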
For Direction II, I have constructed polynomial neural networks with different architectures to verify
my theoretical findings computationally and to experiment with particular architectures, in order to formulate
conjectures and understand the geometry and behavior of these networks.
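A minimal sketch of such a construction, assuming PyTorch: each layer is linear followed by a coordinate-wise power activation, so the overall map is a tuple of homogeneous polynomials in the input; the architecture is encoded by the width sequence and the activation degree.

```python
import torch
import torch.nn as nn


class PolynomialNetwork(nn.Module):
    """Deep polynomial network given by layer widths; activation x -> x**degree."""

    def __init__(self, widths, degree=2):
        super().__init__()
        self.degree = degree
        self.layers = nn.ModuleList(
            nn.Linear(widths[i], widths[i + 1], bias=False)
            for i in range(len(widths) - 1)
        )

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = layer(x) ** self.degree   # power activation after each hidden layer
        return self.layers[-1](x)         # final layer is linear

# Example: widths (3, 4, 1) with quadratic activation, so the single output
# is a homogeneous polynomial of degree 2 in three variables.
net = PolynomialNetwork([3, 4, 1], degree=2)
y = net(torch.randn(5, 3))
print(y.shape)
```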
For Direction III, I use homotopy
continuation to count the critical points of the log-likelihood function, and a combination of
persistent homology and principal component analysis to understand the landscape of a loss function and the behavior
of a neural network during training.
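For the loss-landscape part, a minimal sketch assuming scikit-learn and ripser are installed: record the flattened parameter vector after each training step, project the trajectory with PCA, and compute the persistent homology of the resulting point cloud. The `snapshots` array below is random data standing in for a real training trajectory.

```python
import numpy as np
from ripser import ripser
from sklearn.decomposition import PCA

# Stand-in for parameter vectors collected during training.
rng = np.random.default_rng(3)
snapshots = rng.standard_normal((500, 40))

# Low-dimensional view of the trajectory and its persistence diagrams.
trajectory_2d = PCA(n_components=2).fit_transform(snapshots)
diagrams = ripser(snapshots, maxdim=1)['dgms']
print(trajectory_2d.shape, [len(d) for d in diagrams])
```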
| Date | Title | Location |
| --- | --- | --- |
| 2024 June | "Tensors: Algebra-Geometry-Applications" | Fort Collins, USA |
| 2024 February | "Tensor Networks" | Los Angeles, USA |
| 2024 January | "Winter Program in Machine Learning" | Austin, USA |
| 2024 January | "Connecting Higher-Order Statistics and Symmetric Tensors" | Providence, USA |
| 2023 Fall | "Apprenticeship Week: Varieties from Statistics" | Chicago, USA |
| 2023 August | "Numbers in the Universe" | Kyiv, Ukraine |
| 2023 July | "SIAM Conference on Applied Algebraic Geometry" | Eindhoven, The Netherlands |
| 2023 June | "Algebraic and topological interplay of algebraic varieties" | Jaca, Spain |
| 2023 June | "Let’s get $\mathbb{R}$eal" | Leipzig, Germany |
| 2018 January | "Joint Mathematics Meetings" | San Diego, USA |
| 2017 November | "Field of Dreams Conference" | St. Louis, USA |
| 2017 August | "Interactions between Representation Theory and Algebraic Geometry" | Chicago, USA |
| 2017 July | "MAA MathFest Conference" | Chicago, USA |
You can reach me by email at mzubkov at berkeley dot edu.