Parallel Rank-Adaptive Integrators for Dynamical Low-Rank Approximation
Dynamical Low-Rank Approximation (DLRA) offers a powerful model order reduction framework for efficiently simulating high-dimensional dynamical systems. In this talk, we present a recent advance in this area: a parallel rank-adaptive integrator that updates the low-rank bases and coefficients concurrently, eliminating sequential dependencies between the substeps and remaining robust in the presence of small singular values (one step of such a scheme is sketched below). The method incorporates an embedded error estimator and a novel step rejection strategy, ensuring reliable performance across time scales. We demonstrate the applicability of this approach in two directions. First, we extend the integrator to tree tensor networks, enabling scalable, parallel simulation of high-dimensional dynamics relevant to quantum systems, kinetic equations, and uncertainty quantification. Second, we apply low-rank geometric integration principles to machine learning, introducing GeoLoRA, a method for parameter-efficient fine-tuning of large neural networks via dynamic rank-adaptive updates (illustrated by the code sketch at the end). This framework achieves state-of-the-art results while keeping model complexity under control. These contributions address key challenges in the simulation and control of complex systems, balancing computational cost and accuracy through low-dimensional representations. By connecting recent advances in numerical integration, model reduction, and machine learning, this work contributes to the development of scalable methods for simulation and control across scientific domains.
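For concreteness, the following is a minimal sketch of one step of a parallel rank-adaptive integrator of the kind described above; the notation ($F$ for the right-hand side, $r$ for the current rank, $\vartheta$ for the truncation tolerance) is ours, and the talk's exact scheme, error estimator, and rejection criterion may differ in detail. Given a rank-$r$ factorization $Y_0 = U_0 S_0 V_0^\top$ of the current approximation to $\dot{Y}(t) = F(t, Y(t))$, one step from $t_0$ to $t_1$ solves, in parallel,
\begin{align*}
  \dot{K}(t) &= F\bigl(t, K(t) V_0^\top\bigr)\, V_0, & K(t_0) &= U_0 S_0,\\
  \dot{L}(t) &= F\bigl(t, U_0 L(t)^\top\bigr)^{\top} U_0, & L(t_0) &= V_0 S_0^\top,\\
  \dot{\bar S}(t) &= U_0^\top F\bigl(t, U_0 \bar S(t) V_0^\top\bigr)\, V_0, & \bar S(t_0) &= S_0.
\end{align*}
The bases are then augmented, $\hat U = [\, U_0 \;\; \widetilde U \,]$ and $\hat V = [\, V_0 \;\; \widetilde V \,]$, where $\widetilde U$ and $\widetilde V$ are orthonormal bases of the new directions contained in $K(t_1)$ and $L(t_1)$, and the augmented coefficient matrix
\[
  \hat S = \begin{pmatrix} \bar S(t_1) & L(t_1)^\top \widetilde V \\ \widetilde U^\top K(t_1) & 0 \end{pmatrix} \in \mathbb{R}^{2r \times 2r}
\]
is truncated via an SVD: the new rank $r_1 \le 2r$ is the smallest rank whose discarded singular values fall below the tolerance $\vartheta$, yielding $Y_1 = U_1 S_1 V_1^\top$. Because the $K$-, $L$-, and $\bar S$-substeps depend only on the shared data $(U_0, S_0, V_0)$, they can run concurrently, which is the source of the parallelism; the truncation stage is also a natural place for the embedded error estimate and step rejection mentioned in the abstract to act.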
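On the machine learning side, the toy NumPy sketch below conveys the flavor of dynamic rank-adaptive low-rank updates in the spirit of GeoLoRA; it is not the paper's algorithm. GeoLoRA itself updates the factors via parallel substeps analogous to the integrator above, whereas this toy forms the full adapter matrix, which is affordable only because the dimensions are tiny. All names here (W0, target, truncate, lr, tol) are our own illustrative choices.

# Toy sketch (ours, not the GeoLoRA algorithm) of rank-adaptive
# low-rank fine-tuning: a frozen weight W0 receives a low-rank adapter
# whose rank is chosen on the fly by SVD truncation after each update.
import numpy as np

rng = np.random.default_rng(0)
n, m, true_rank = 64, 64, 5
W0 = rng.standard_normal((n, m))        # frozen pretrained weight (toy)
delta = 0.1 * rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, m))
target = W0 + delta                     # weight we fine-tune towards

def truncate(U, s, Vt, tol):
    # Keep the smallest number of singular values whose discarded
    # tail has Frobenius norm below tol (this sets the adapted rank).
    tails = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]   # tails[i] = ||s[i:]||
    r = max(1, int(np.argmax(np.append(tails, 0.0) <= tol)))
    return U[:, :r], s[:r], Vt[:r, :]

# Rank-1 zero initialization of the adapter factors A = U diag(s) Vt.
U, s, Vt = np.zeros((n, 1)), np.zeros(1), np.zeros((1, m))
lr, tol = 0.5, 1e-2
for _ in range(200):
    A = (U * s) @ Vt                    # current adapter matrix
    grad = (W0 + A) - target            # d/dA of 0.5*||W0 + A - target||_F^2
    U, s, Vt = np.linalg.svd(A - lr * grad, full_matrices=False)
    U, s, Vt = truncate(U, s, Vt, tol)  # adapt the rank after each step

print("adapted rank:", s.size)          # settles at true_rank = 5
print("fit error:", np.linalg.norm(W0 + (U * s) @ Vt - target))

In this toy example the adapter's rank settles at the true update rank (5 here) without it being specified in advance, which is the practical appeal of rank adaptivity in parameter-efficient fine-tuning: model complexity is controlled by a tolerance rather than a hand-picked rank.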