Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders

Network architecture of the deep convolutional autoencoder, which takes a state of the dynamical system as input and produces an approximated state as output. The encoder extracts a low-dimensional code from the discrete representation of the state by applying the restriction operator and a set of convolutional layers (gray boxes), followed by a set of fully-connected layers (blue rectangles). The decoder approximately reconstructs the high-dimensional vector h(x; θ) by performing the inverse operations of the encoder: it applies fully-connected layers (blue rectangles), followed by a set of transposed convolutional layers (gray boxes), and then applies the prolongation operator to the resulting quantity.
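The encoder/decoder structure described in the caption can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: all layer sizes, channel counts, and weight initializations below are invented for the example, the restriction and prolongation operators are taken as identity, and the weights are untrained (a real autoencoder would be trained to minimize reconstruction error).

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def conv1d(x, w, stride=2):
    """Strided 1-D convolution. x: (C_in, N), w: (C_out, C_in, k) -> (C_out, M)."""
    C_out, C_in, k = w.shape
    M = (x.shape[1] - k) // stride + 1
    out = np.zeros((C_out, M))
    for i in range(M):
        out[:, i] = np.tensordot(w, x[:, i*stride:i*stride + k],
                                 axes=([1, 2], [0, 1]))
    return out

def conv1d_T(x, w, stride=2):
    """Transposed (adjoint) convolution. x: (C_out, M) -> (C_in, (M-1)*stride + k)."""
    C_out, C_in, k = w.shape
    M = x.shape[1]
    out = np.zeros((C_in, (M - 1) * stride + k))
    for i in range(M):
        out[:, i*stride:i*stride + k] += np.tensordot(x[:, i], w, axes=(0, 0))
    return out

# Hypothetical sizes: N = 62 grid points, latent dimension 5, kernel 4, stride 2.
N, latent = 62, 5
w1 = 0.1 * rng.standard_normal((8, 1, 4))     # conv: 1 -> 8 channels, length 62 -> 30
w2 = 0.1 * rng.standard_normal((16, 8, 4))    # conv: 8 -> 16 channels, length 30 -> 14
W_enc = 0.1 * rng.standard_normal((latent, 16 * 14))  # fully-connected layers -> code
W_dec = 0.1 * rng.standard_normal((16 * 14, latent))  # fully-connected layers from code

def encoder(x):
    """Discrete state (N,) -> low-dimensional code (latent,)."""
    h = relu(conv1d(x[None, :], w1))
    h = relu(conv1d(h, w2))
    return W_enc @ h.ravel()

def decoder(z):
    """Code (latent,) -> approximate high-dimensional state (N,), via the
    inverse sequence of operations: fully-connected, then transposed convs."""
    h = relu(W_dec @ z).reshape(16, 14)
    h = relu(conv1d_T(h, w2))            # length 14 -> 30
    return conv1d_T(h, w1).ravel()       # length 30 -> 62

x = np.sin(np.linspace(0.0, np.pi, N))   # an example discrete state vector
x_tilde = decoder(encoder(x))            # approximate reconstruction
```

The sizes are chosen so each transposed convolution exactly inverts the shape change of the corresponding convolution, making the decoder's output dimension match the encoder's input dimension.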


Nearly all model-reduction techniques project the governing equations onto a linear subspace of the original state space. Such subspaces are typically computed using methods such as balanced truncation, rational interpolation, the reduced-basis method, and (balanced) POD. Unfortunately, restricting the state to evolve in a linear subspace imposes a fundamental limitation on the accuracy of the resulting reduced-order model (ROM). In particular, linear-subspace ROMs can be expected to produce low-dimensional models with high accuracy only if the problem admits a fast-decaying Kolmogorov $n$-width (e.g., diffusion-dominated problems). Unfortunately, many problems of interest exhibit a slowly decaying Kolmogorov $n$-width (e.g., advection-dominated problems). To address this, we propose a novel framework for projecting dynamical systems onto nonlinear manifolds using minimum-residual formulations at the time-continuous and time-discrete levels; the former leads to \textit{manifold Galerkin} projection, while the latter leads to \textit{manifold least-squares Petrov–Galerkin} (LSPG) projection. We perform analyses that provide insight into the relationship between these proposed approaches and classical linear-subspace reduced-order models. We also propose a computationally practical approach for computing the nonlinear manifold, which is based on convolutional autoencoders from deep learning. Finally, we demonstrate the ability of the method to significantly outperform even the optimal linear-subspace ROM on benchmark advection-dominated problems, thereby overcoming the intrinsic $n$-width limitations of linear subspaces.
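To make the contrast with linear-subspace projection concrete, the time-continuous (manifold Galerkin) formulation can be sketched as follows; the notation ($g$, $J_g$, $x_{\mathrm{ref}}$) is ours and this is only an illustrative sketch of minimum-residual projection onto a manifold, not a verbatim statement of the paper's formulation. If the decoder $g:\mathbb{R}^p \to \mathbb{R}^N$ (with $p \ll N$) defines the trial manifold, the state is approximated as $\tilde{x}(t) = x_{\mathrm{ref}} + g(\hat{x}(t))$, and minimizing the residual of the full-order ODE $\dot{x} = f(x, t)$ over the tangent space of the manifold yields

```latex
\dot{\hat{x}}(t)
  = J_g\bigl(\hat{x}(t)\bigr)^{+}\,
    f\bigl(x_{\mathrm{ref}} + g(\hat{x}(t)),\, t\bigr),
\qquad
J_g(\hat{x}) := \frac{\partial g}{\partial \hat{x}}(\hat{x}),
```

where $(\cdot)^{+}$ denotes the Moore–Penrose pseudoinverse. A linear-subspace ROM is recovered as the special case $g(\hat{x}) = \Phi \hat{x}$ with a constant basis matrix $\Phi$, in which case $J_g = \Phi$ is state-independent; the nonlinear decoder instead supplies a state-dependent tangent space, which is what lets the reduced model track advection-dominated dynamics that no fixed low-dimensional subspace captures well.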

Journal of Computational Physics, Vol. 404, p. 108973 (2020)