ATML Advanced Time Integrators

The efficient use of modern high-performance computing (HPC) systems has become one of the key challenges in computational science. For time-dependent differential equations, solvers and function evaluations in space have been the prime targets for parallelization and performance optimization. Yet, the time integration schemes used to propagate solutions forward in time are equally important to optimize. This Algorithms, Tools and Methods Laboratory (ATML) at JSC focuses primarily on the design, analysis, implementation and optimization of advanced time integrators (ATI) and space-time multilevel methods for extreme-scale HPC systems. This includes parallel-in-time integration techniques, fault-tolerant and adaptive time-stepping methods, high-order schemes on accelerators and other topics of relevance to the community.

Contact: Dr. Ruth Schöbel

Research Topics

Parallelization across the steps using multilevel methods

To obtain large-scale parallelization in time, various methods such as Parareal or the parallel full approximation scheme in space and time (PFASST) can be used to integrate multiple steps simultaneously. To overcome the inherent serial dependence in the time direction, these approaches typically introduce a space-time hierarchy, where integrators with different costs are coupled in an iterative fashion. Serial dependencies are shifted to the coarsest level, allowing the computationally expensive parts on finer levels to be treated in parallel. These methods show a strong relationship to linear or nonlinear multigrid methods and can be analyzed in a similar way.
Ref: M. L. Minion, R. Speck, M. Bolten, M. Emmett, and D. Ruprecht, Interweaving PFASST and Parallel Multigrid, SIAM Journal on Scientific Computing, 37(5), 244-263, 2015.
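
As a minimal illustration of the basic idea (not of the group's actual multilevel codes), the following Python sketch runs the classical Parareal iteration for the scalar test equation u' = lambda*u: a single explicit Euler step serves as cheap coarse propagator, several Euler sub-steps as expensive fine propagator, and the fine sweeps over the time slices are independent and could run in parallel. All names and parameters are illustrative choices.

import numpy as np

# Parareal sketch for u' = lam*u on [0, T], split into N time slices (illustrative toy setup)
lam, T, N, K = -1.0, 1.0, 10, 5               # problem, interval, number of slices, iterations
dt = T / N

def G(u, dt):                                  # cheap coarse propagator: one explicit Euler step
    return u + dt * lam * u

def F(u, dt, m=50):                            # expensive fine propagator: m explicit Euler sub-steps
    for _ in range(m):
        u = u + (dt / m) * lam * u
    return u

u = np.zeros(N + 1)
u[0] = 1.0
for n in range(N):                             # initial guess from a serial coarse sweep
    u[n + 1] = G(u[n], dt)

for k in range(K):                             # Parareal iteration
    Fu = [F(u[n], dt) for n in range(N)]       # fine propagation: independent per slice, parallelizable
    u_new = np.zeros_like(u)
    u_new[0] = u[0]
    for n in range(N):                         # serial coarse sweep with fine correction
        u_new[n + 1] = G(u_new[n], dt) + Fu[n] - G(u[n], dt)
    u = u_new

print(u[-1], "vs. exact", np.exp(lam * T))     # converges toward the serial fine solution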



Parallelization across the steps using diagonalization

In order to avoid coarsening with all its pitfalls, diagonalization-based methods make use of block-circulant preconditioners to parallelize the integration of multiple time steps. These preconditioners can be diagonalized efficiently using fast Fourier transforms (FFT) in time. While this approach works well even for hyperbolic problems, its direct application is restricted to linear problems. The key question addressed in this field of research is how to obtain efficient parallel integrators for nonlinear problems.
Ref: G. Caklovic, R. Speck, and M. Frank, A parallel implementation of a diagonalization-based parallel-in-time integrator, arXiv:2103.12571 [math.NA], submitted.
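
To illustrate the core mechanism, the Python sketch below solves a small block-circulant all-at-once system (implicit Euler with periodic coupling in time for a linear test problem u' = A*u) by an FFT in time followed by independent solves, one per mode, which is where the parallelism in time comes from. In the actual methods such (alpha-)circulant matrices act as preconditioners inside an outer iteration; the matrix, step size and dimensions here are illustrative assumptions.

import numpy as np

# Block-circulant solve via FFT in time (illustrative toy setup, not production code)
rng = np.random.default_rng(0)
d, L, dt = 2, 8, 0.1                                   # spatial dimension, time steps, step size
A = np.array([[0.0, 1.0], [-1.0, 0.0]])                # linear test problem u' = A u
b = rng.standard_normal((L, d))                        # stacked right-hand side, one block per step

def circulant_solve(b):
    # Implicit Euler with periodic coupling: first block column is [I - dt*A, -I, 0, ..., 0]
    bhat = np.fft.fft(b, axis=0)                       # FFT in time decouples the time steps
    xhat = np.empty_like(bhat)
    for l in range(L):                                 # L small, independent solves -> parallel in time
        omega = np.exp(-2j * np.pi * l / L)
        xhat[l] = np.linalg.solve(np.eye(d) - dt * A - omega * np.eye(d), bhat[l])
    return np.fft.ifft(xhat, axis=0).real              # transform back to the time domain

# Verify against a direct solve of the assembled block-circulant system
C = np.zeros((L * d, L * d))
for n in range(L):
    m = (n - 1) % L
    C[n*d:(n+1)*d, n*d:(n+1)*d] = np.eye(d) - dt * A
    C[n*d:(n+1)*d, m*d:(m+1)*d] = -np.eye(d)
x = circulant_solve(b)
print(np.allclose(C @ x.reshape(-1), b.reshape(-1)))   # True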



Parallelization across the method

If the application of high-order, multi-stage time integrators is possible or even required, another way to introduce parallelism in time is the use of stage-parallel integrators. While the potential for parallelism is naturally limited here, the implementation is rather straightforward and the efficiency is usually favorable. Yet, finding good stage-parallel methods is the major challenge. The group primarily focuses on spectral deferred corrections with parallel preconditioners. Both artificial and natural intelligence can be helpful ingredients here.
Ref: R. Schöbel and R. Speck, PFASST-ER: Combining the parallel full approximation scheme in space and time with parallelization across the method, Computing and Visualization in Science, 23, 12, 2020.
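
The Python sketch below shows the principle for the scalar test equation u' = lambda*u on a single time step: spectral deferred corrections for the collocation problem, preconditioned with a diagonal matrix so that all collocation-node updates are independent and could be computed simultaneously. The nodes and the diagonal entries are simple illustrative picks; finding genuinely good parallel preconditioners is the research question mentioned above.

import numpy as np
from numpy.polynomial import polynomial as P

# Stage-parallel SDC sketch for u' = lam*u on one step [0, dt] (illustrative choices throughout)
lam, dt, u0, M, K = -1.0, 0.1, 1.0, 4, 10
nodes = np.array([0.2, 0.4, 0.7, 1.0])                 # collocation nodes on (0, 1]

# Quadrature matrix Q: Q[m, j] = integral of the j-th Lagrange polynomial from 0 to nodes[m]
Q = np.zeros((M, M))
for j in range(M):
    y = np.zeros(M); y[j] = 1.0
    lagrange = np.polyfit(nodes, y, M - 1)[::-1]       # j-th Lagrange polynomial, power basis
    antider = P.polyint(lagrange)
    Q[:, j] = P.polyval(nodes, antider) - P.polyval(0.0, antider)

qd = np.diag(Q)                                        # diagonal preconditioner (illustrative pick)

# Collocation problem: (I - dt*lam*Q) u = u0; SDC as preconditioned fixed-point iteration
u = np.full(M, u0)
for k in range(K):
    residual = u0 + dt * lam * (Q @ u) - u
    # (I - dt*lam*diag(qd)) is diagonal: every node update is independent -> parallel across the method
    u = u + residual / (1.0 - dt * lam * qd)

u_coll = np.linalg.solve(np.eye(M) - dt * lam * Q, np.full(M, u0))
print(np.abs(u - u_coll).max())                        # iterates approach the collocation solution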







Fault-tolerant and adaptive time integrators

Many parallel-in-time (PinT) methods share features that make them natural candidates for algorithm-based fault tolerance (ABFT): they hold copies of the (approximate) solution at different times on different processors and they are iterative and/or hierarchical by nature. Since time stepping is typically the outermost loop in the numerical solution of a time-dependent partial differential equation, protecting it by ABFT covers a large part of the code. Efforts to provide ABFT, for instance based on adaptivity in time, can boost resilience and computational efficiency at the same time and are very promising.
Ref: R. Speck and D. Ruprecht, Toward fault-tolerant parallel-in-time integration with PFASST, Parallel Computing, 62, 20-37, 2017.
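
As a minimal illustration of adaptivity in time (not of the PFASST-specific recovery strategies discussed in the reference), the Python sketch below uses an embedded explicit Euler / Heun pair to estimate the local error and to accept, reject and resize time steps accordingly; rejected steps are simply recomputed with a smaller step size. The test problem, tolerance and controller constants are illustrative assumptions.

import numpy as np

# Adaptive time stepping with an embedded Euler/Heun error estimate (illustrative toy setup)
def f(t, u):
    return -u + np.sin(t)                              # simple non-autonomous test problem

def adaptive_integrate(u, t, t_end, dt, tol=1e-6):
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, u)
        k2 = f(t + dt, u + dt * k1)
        u_low = u + dt * k1                            # first-order (explicit Euler) result
        u_high = u + 0.5 * dt * (k1 + k2)              # second-order (Heun) result
        err = abs(u_high - u_low)                      # embedded local error estimate
        if err <= tol:                                 # accept the step and advance
            t, u = t + dt, u_high
        # resize the step in either case; a rejected step is recomputed with the smaller dt
        dt *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return u

print(adaptive_integrate(1.0, 0.0, 5.0, 0.1))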




Further Activities, Team Members

Last Modified: 31.01.2024