Adaptive stochastic Galerkin FEM with hierarchical tensor representations
- Eigel, Martin
- Pfeffer, Max
- Schneider, Reinhold
2010 Mathematics Subject Classification
- 35R60 47B80 60H35 65C20 65N12 65N22 65J10
- partial differential equations with random coefficients, tensor representation, tensor train, uncertainty quantification, stochastic finite element methods, operator equations, adaptive methods, ALS, low-rank, reduced basis methods
The solution of PDEs with stochastic data commonly leads to very high-dimensional algebraic problems, e.g. when multiplicative noise is present. The stochastic Galerkin FEM considered in this paper then suffers from the curse of dimensionality. This is directly related to the number of random variables required for an adequate representation of the random fields included in the PDE. With the presented new approach, we circumvent this major complexity obstacle by combining two highly efficient model reduction strategies, namely a modern low-rank representation of the problem in the tensor train format and a refinement algorithm based on a posteriori error estimates that adaptively adjusts the different discretizations employed. The adaptive adjustment includes the refinement of the FE mesh based on a residual estimator, the problem-adapted stochastic discretization in anisotropic Legendre Wiener chaos, and the successive increase of the tensor rank. Computable a posteriori error estimators are derived for all error terms emanating from the discretizations and from the iterative solution of the problem with a preconditioned ALS (alternating least squares) scheme. Strikingly, it is possible to exploit the tensor structure of the problem to evaluate all error terms very efficiently. A set of benchmark problems illustrates the performance of the adaptive algorithm with higher-order FE. Moreover, the influence of the tensor rank on the approximation quality is investigated.
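To illustrate the tensor train (TT) format that underlies the abstract's low-rank representation, the sketch below implements the standard TT-SVD decomposition (a sequence of truncated SVDs on matricizations of a full tensor) and the reverse contraction back to a full tensor. This is a minimal generic example, not the paper's preconditioned ALS solver; the function names and the `max_rank` truncation parameter are our own choices for illustration.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a full tensor into TT cores via sequential truncated SVDs.

    Each core has shape (r_{k-1}, n_k, r_k) with boundary ranks r_0 = r_d = 1.
    """
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                       # rank truncation
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # carry the remainder to the next unfolding
        mat = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    res = cores[0]                                      # shape (1, n_1, r_1)
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.squeeze(axis=(0, -1))
```

With `max_rank` at least as large as the exact TT ranks, the reconstruction is exact up to floating-point error; smaller values yield the low-rank approximations whose rank-dependent accuracy the paper investigates.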
- Numer. Math., 136 (2017) pp. 765--803.