Diffeomorphic interpolation for efficient topological optimization
Abstract:
Topological Data Analysis (TDA) provides a pipeline to extract quantitative topological descriptors from structured objects. This enables the definition of topological loss functions, which quantify to what extent a given object exhibits some topological properties. These losses can then be used to perform topological optimization via gradient descent routines. While theoretically sound, topological optimization faces an important challenge: gradients tend to be extremely sparse, in the sense that the loss function typically depends on only very few coordinates of the input object, which yields dramatically slow optimization schemes in practice.
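To make this sparsity concrete, here is a minimal, self-contained sketch (not the implementation from the talk). It relies on the fact that the death times of the 0-dimensional Vietoris-Rips persistence diagram coincide with the edge lengths of a minimum spanning tree, and shows that a toy loss targeting a single persistence bar back-propagates to only two points of the cloud; the loss, point-cloud size, and seed are illustrative choices.

```python
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_diagram_deaths(X):
    """Death times of the 0-dimensional Rips persistence diagram of X.
    For H0 these are exactly the edge lengths of a minimum spanning tree."""
    D = torch.cdist(X, X)                            # differentiable pairwise distances
    mst = minimum_spanning_tree(D.detach().numpy())  # combinatorial (non-differentiable) step
    i, j = mst.nonzero()
    # Re-read the selected edge lengths from the differentiable matrix,
    # so gradients flow back to the points spanning those edges.
    return D[torch.as_tensor(i, dtype=torch.long), torch.as_tensor(j, dtype=torch.long)]

torch.manual_seed(0)
X = torch.randn(2000, 2, requires_grad=True)
# Toy topological loss: shrink the single most persistent H0 bar.
loss = h0_diagram_deaths(X).max()
loss.backward()
# Only 2 of the 2000 points receive a nonzero gradient.
print((X.grad.abs().sum(dim=1) > 0).sum().item())
```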
Focusing on the central case of topological optimization for point clouds, in this talk I will present a way to overcome this limitation using diffeomorphic interpolation, which turns sparse gradients into smooth vector fields defined on the whole space, with quantifiable Lipschitz constants.
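One concrete way to realize such an interpolation is Gaussian-kernel (RKHS) regression of the sparse gradient, sketched below. The function name and the hyperparameters sigma and reg are illustrative, not the talk's values; the bandwidth controls the Lipschitz constant of the resulting field, and a small enough descent step keeps the update map x + step * v(x) invertible.

```python
def gaussian_vector_field(C, G, sigma=0.5, reg=1e-6):
    """Interpolate gradient values G, known only at the few critical points C,
    into a smooth vector field defined on the whole ambient space."""
    K = torch.exp(-torch.cdist(C, C) ** 2 / (2 * sigma ** 2))
    alpha = torch.linalg.solve(K + reg * torch.eye(len(C)), G)

    def v(Y):
        # Smooth displacement for arbitrary query points, critical or not.
        return torch.exp(-torch.cdist(Y, C) ** 2 / (2 * sigma ** 2)) @ alpha

    return v
```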
In particular, I will show that this approach combines efficiently with subsampling techniques routinely used in TDA: the diffeomorphism derived from the gradient computed on a subsample can be used to update the coordinates of the full input object, which makes it possible to perform topological optimization on point clouds at an unprecedented scale.
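A usage sketch of how the two pieces above combine with subsampling, reusing the toy helpers defined earlier; the subsample size, step size, cloud size, and iteration count are all illustrative. The gradient is computed on a small subsample only, but every point of the full cloud is moved by the interpolated field.

```python
X_full = torch.randn(200_000, 2)                     # large input cloud
lr = 0.05                                            # illustrative step size

for _ in range(100):
    idx = torch.randperm(len(X_full))[:1000]         # random subsample
    X_sub = X_full[idx].clone().requires_grad_(True)
    h0_diagram_deaths(X_sub).max().backward()        # sparse topological gradient
    crit = (X_sub.grad.abs().sum(dim=1) > 0)         # the few critical points
    v = gaussian_vector_field(X_sub[crit].detach(), -X_sub.grad[crit])
    X_full = X_full + lr * v(X_full)                 # one descent step on the *full* cloud
```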
Finally, I will also showcase the relevance of this approach for black-box autoencoder (AE) regularization, where the aim is to enforce topological priors on the latent spaces associated with fixed, pre-trained, black-box AE models. In this setting, I will show that the diffeomorphic flow can be learned once and then re-applied to new data in linear time (whereas vanilla topological optimization has to be re-run from scratch). Moreover, reverting the flow makes it possible to generate data by sampling the topologically-optimized latent space directly, yielding better interpretability of the model.
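A last sketch illustrates the AE use case under the same toy helpers. The pre-trained black-box encoder and decoder are replaced here by dummy linear maps so that the snippet runs, the loss stands in for whichever topological prior is enforced, and the inverse flow is only a first-order approximation; everything else (names, dimensions, step counts) is an illustrative assumption. The key point is that the flow, stored as a list of vector fields, is learned once on training codes, re-applied to new codes in time linear in their number, and approximately reverted for generation.

```python
W = torch.randn(8, 2)
encode = lambda x: x @ W            # dummy stand-in for the black-box encoder (data -> latent)
decode = lambda z: z @ W.T          # dummy stand-in for the black-box decoder (latent -> data)

Z, flow, lr = encode(torch.randn(2000, 8)), [], 0.05
for _ in range(50):                                  # learn the flow once on training codes
    Z_req = Z.clone().requires_grad_(True)
    h0_diagram_deaths(Z_req).max().backward()        # toy loss standing in for the prior
    crit = (Z_req.grad.abs().sum(dim=1) > 0)
    v = gaussian_vector_field(Z_req[crit].detach(), -Z_req.grad[crit])
    flow.append(v)                                   # store the vector field of this step
    Z = Z + lr * v(Z)

def push(z):                        # re-apply the learned flow to new codes: linear time
    for v in flow:
        z = z + lr * v(z)
    return z

def pull(z):                        # approximate inverse flow (first-order, small steps)
    for v in reversed(flow):
        z = z - lr * v(z)
    return z

z_new = push(encode(torch.randn(10, 8)))                  # embed new data without re-optimizing
generated = decode(pull(Z[:10] + 0.1 * torch.randn(10, 2)))  # sample optimized latent space, revert, decode
```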