Visualization of Time-Varying Adaptive Mesh Refinement Data

Project Scope

Adaptive mesh refinement (AMR) is used by 3D computational fluid dynamics (CFD) solvers to focus the computation on interesting regions. Popular AMR codes include NASA's LAVA fluid solver and FLASH, maintained by the University of Rochester. A common AMR topology is the octree, but other tree types, branching factors, cell shapes, etc. are possible.
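As an illustration of the general idea (not the data layout of any particular solver; all names here are ours), an octree-style AMR hierarchy can be sketched as follows: each cell is an axis-aligned box that either stores a value or is refined into eight children.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OctreeCell:
    """One AMR cell: an axis-aligned box that is either a leaf holding
    a scalar value or refined into eight equally sized children."""
    lower: tuple          # (x, y, z) of the box's lower corner
    size: float           # edge length (cubic cells for simplicity)
    value: float = 0.0    # cell-centered scalar (density, temperature, ...)
    children: Optional[List["OctreeCell"]] = None

    def refine(self):
        """Split this cell into 8 children (branching factor 2 per axis)."""
        h = self.size / 2.0
        self.children = [
            OctreeCell((self.lower[0] + dx * h,
                        self.lower[1] + dy * h,
                        self.lower[2] + dz * h), h, self.value)
            for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)
        ]

    def leaves(self):
        """Yield all leaf cells (the cells a solver actually stores)."""
        if self.children is None:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

root = OctreeCell((0.0, 0.0, 0.0), 1.0)
root.refine()                # level 1: 8 cells
root.children[0].refine()    # level 2: refine one "interesting" corner
print(sum(1 for _ in root.leaves()))  # 7 coarse + 8 fine = 15 leaves
```

Real codes such as FLASH refine blocks of cells rather than single cells, but the tree structure is the same in spirit.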

AMR codes virtually always store the data (density, temperature, velocities, etc.) at the "cell centers". To picture this, think of each cell as a small box: "vertex-centric" methods associate the data with the box corners, while "cell-centric" methods store the data at the center of the box. Efficient, high-quality visualizations usually have to reconstruct the data at arbitrary sample points, and to do so they need to quickly identify rectangular neighborhoods.
In (a), with vertex-centric data, this is simple: moving up/down, left/right, and front to back, we eventually find the data points nearest to the sample position, and they form a rectangular region. In (b), with cell-centric data, finding that neighborhood is non-obvious even in the simplest of cases, which has led visualization packages such as OSPRay to incorporate complicated auxiliary data structures for cell location. Other visualization packages support this kind of data, but do not properly handle the "level boundaries" where cells of different sizes connect, and produce cracks and other artifacts when isosurfaces or volume renderings are generated from this kind of data.
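On a single uniform level, cell-centered reconstruction is still easy, because the data points form a regular lattice shifted by half a cell. The following sketch (our own illustrative code, not taken from any of the packages above) shows that easy case; the hard part of cell-centric AMR is precisely that this index arithmetic breaks down at level boundaries, where neighboring cells have different sizes.

```python
def sample_cell_centered(grid, dims, spacing, p):
    """Trilinear reconstruction of cell-centered data on a SINGLE uniform
    level: data point (i, j, k) lives at position ((i+0.5)*spacing, ...).
    grid is a flat list in x-fastest order."""
    nx, ny, nz = dims
    def idx(i, j, k):
        return (k * ny + j) * nx + i
    # Shift so data points land on integer lattice coordinates, then clamp.
    lo, frac = [], []
    for x, n in zip(p, dims):
        t = min(max(x / spacing - 0.5, 0.0), n - 1 - 1e-6)
        lo.append(int(t))
        frac.append(t - int(t))
    (i, j, k), (fx, fy, fz) = lo, frac
    # Blend the 8 surrounding data points (the "rectangular neighborhood").
    v = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                v += w * grid[idx(i + dx, j + dy, k + dz)]
    return v

# 2x2x2 cell-centered grid with value x + 2y + 4z at cell (x, y, z):
grid = [x + 2 * y + 4 * z for z in (0, 1) for y in (0, 1) for x in (0, 1)]
print(sample_cell_centered(grid, (2, 2, 2), 1.0, (1.0, 1.0, 1.0)))  # 3.5
```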

In VTV-AMR, our focus is on crack-free visualizations that use state-of-the-art GPU ray tracing technology to produce high-quality renderings, and hence we have to use the aforementioned auxiliary data structures. Unfortunately, the memory footprint and construction times of such data structures are not designed for real-time performance, nor for time-varying data comprised of multiple simulation time steps with adaptively changing AMR grids.

Our contributions will advance the state of the art in high-quality GPU reconstruction of cell-centric AMR data, enabling interactive visualization of data sets composed of hundreds to thousands of time steps. We focus on data where even single time steps saturate most of the available GPU memory. To that end, we build on state-of-the-art software solutions that were recently published in leading international research journals, and that we will extend to support time-varying data.

Finally, here are some visualizations that we can already create with our software. For more of these, check out the Visualizations subpage.
The "NASA exajet" data set is courtesy of Pat Moran. The molecular cloud data set is used with friendly permission of Daniel Seifried of the Theoretical Astrophysics Group at the University of Cologne.



Sample Codes

Here we provide links to code fragments generated during the course of the project. VTV-AMR is a basic research project, so the primary outputs are not (and cannot be) full-fledged visualization systems, but rather prototypical sample applications that others could potentially integrate into their own visualization software.

Whenever possible, source code generated by VTV-AMR will be published under permissive open source licenses such as Apache 2.0 or MIT.

Interactive volume lines sample code (owlExaStitcher VIS 2023 Snapshot)
Link: https://github.com/owl-project/owlExaStitcher/tree/interactive-volume-lines
Description: Extension to owlExaStitcher, enabling an interactive visual analytics method we presented at VIS 2023 in Melbourne (this paper). The release can be found on the interactive-volume-lines branch. Teaser video: https://youtu.be/6g67sCP5JN4

ANARI volume viewer
Link: https://github.com/vtvamr/anari-volume-viewer
Description: Mini viewer application for ANARI volumes/spatial fields; supports structured-regular, AMR, and unstructured field types. Can, e.g., be used with owlExaStitcher's AMR ANARI implementation.

owlExaStitcher (EuroVis/CGF 2023 Snapshot)
Link: https://github.com/owl-project/owlExaStitcher
Description: Visualization prototype and data structure we presented at EuroVis 2023; this software is grounded in the ExaBrick software (below), but ports the visualization algorithm to interactive path tracing. For comparison, it also includes a sampler that uses the original ExaBrick data structure.

FLASH to raw converter
Link: https://github.com/vtvamr/flash2raw
Description: Tool to convert from the University of Rochester's FLASH format to a structured raw format that can be read by virtually any volume renderer or visualization system out there; useful for validation. The tool uses a two-stage process: in the first stage, we generate cell data that is readable by ExaBrick; that intermediate representation can then be resampled to a uniform grid stored as a .raw file. This software is primarily used (and therefore only tested) on the CHEOPS HPC system at the University of Cologne.
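The second stage of such a conversion boils down to sampling the AMR leaves at the centers of a uniform output grid. The sketch below illustrates that idea only (our own code, not flash2raw itself): it uses nearest-cell lookup where a real tool would interpolate, and writes little-endian float32 in x-fastest order, the layout most raw volume readers expect.

```python
import struct

def resample_to_raw(cells, dims, bounds, path):
    """Resample a list of non-overlapping AMR leaf cells to a uniform
    grid and write it as a little-endian float32 .raw file (x-fastest
    order). Each output voxel takes the value of the cell containing
    its center. cells: list of (lower, size, value) with cubic cells."""
    nx, ny, nz = dims
    (x0, y0, z0), (x1, y1, z1) = bounds

    def lookup(px, py, pz):
        # Brute-force cell location; a real tool would use a spatial index.
        for (cx, cy, cz), s, v in cells:
            if cx <= px < cx + s and cy <= py < cy + s and cz <= pz < cz + s:
                return v
        return 0.0  # outside the simulation domain

    out = []
    for k in range(nz):
        pz = z0 + (k + 0.5) * (z1 - z0) / nz
        for j in range(ny):
            py = y0 + (j + 0.5) * (y1 - y0) / ny
            for i in range(nx):
                px = x0 + (i + 0.5) * (x1 - x0) / nx
                out.append(lookup(px, py, pz))
    with open(path, "wb") as f:
        f.write(struct.pack("<%df" % len(out), *out))
```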

ExaBrick (TVCG'21 Snapshot)
Link: https://github.com/owl-project/owlExaBrick
Description: Visualization prototype and data structure we presented at IEEE VIS 2020; optimized for NVIDIA RTX GPUs with hardware ray tracing cores. This software forms the basis for the developments in VTV-AMR. ExaBrick is based on OWL by Ingo Wald, with whom we collaborate on this project.



Interactive volume lines
Visual analytics method we presented at VIS'23 in Melbourne. The data set is the molecular cloud AMR data set by Seifried et al.; this is an example where sci-vis and visual analytics share the same data structures to accelerate visualization of complex topologies such as AMR. Find the paper pre-print/author version here: https://pds.uni-koeln.de/sites/pds/szellma1/template.pdf

Animated Exajet
Presented at EGPGV'22 in Rome. See the Publications page for the conference paper. Exajet (courtesy of Pat Moran at NASA) has a fixed AMR grid that does not change over time. That makes it different from other data we focus on; however, it allowed us to concentrate on the asynchronous streaming procedure from NVMe SSD to GPU. The scalar data of each frame is 2.54 GB in size; all the renderings in the video are interactive.
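The general double-buffering idea behind such streaming can be sketched in a few lines; this is plain Python with hypothetical load/render callbacks, purely to illustrate the overlap of I/O and rendering, not the project's actual CUDA/NVMe pipeline.

```python
import queue
import threading

def stream_time_steps(load, render, num_steps, depth=2):
    """Double-buffered streaming: a background thread prefetches time
    steps (e.g. from NVMe SSD) while the foreground renders the current
    one. load(t) returns the data for step t; render(t, data) consumes
    it. depth bounds how far the loader may run ahead of the renderer."""
    buf = queue.Queue(maxsize=depth)

    def loader():
        for t in range(num_steps):
            buf.put((t, load(t)))  # blocks when `depth` steps are queued

    threading.Thread(target=loader, daemon=True).start()
    for _ in range(num_steps):
        t, data = buf.get()        # waits only if the loader lags behind
        render(t, data)
```

With `depth=2`, step t+1 is read from disk while step t is being rendered, hiding the I/O latency as long as loading is not slower than rendering.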

Particle Tracer
This visualization was generated in 2022 during the work on our CiSE paper on flow visualization with RT cores; flow-vis is one of our milestones on the application side.

Binary Neutron Star (Animation)
This visualization was used to create (offline) animations from a FLASH simulation (binary neutron star and common envelope ejection). See the arXiv pre-print on the Publications page. Videos can be found on Jamie Law-Smith's youtube channel: [link].

Molecular Clouds
Visualizations made in preparation for the original VIS paper on ExaBrick in 2020. The data are courtesy of Daniel Seifried, who is currently with the Theoretical Astrophysics Group at the University of Cologne. The simulations were generated using the FLASH AMR code.


Principal Investigator
PD Dr. Stefan Zellmann
Personal website: https://pds.uni-koeln.de/people/stefan-zellmann
Email: zellmann @ uni-koeln.de

Jingwen Yi
Email: j.yi @ uni-koeln.de

For more information about this project, or to get in touch, shoot a message to:

Stefan Zellmann (PhD)
University of Cologne, Parallel and Distributed Systems (Stefan Wesner's chair)
Weyertal 121
50931 Cologne (GER)
Email: zellmann @ uni-koeln.de


This project is supported by the German Research Foundation (DFG), under Grant No. 456842964.
More information can be found at this [link].