Introduction
Published in Terry A. Slocum, Robert B. McMaster, Fritz C. Kessler, Hugh H. Howard, Thematic Cartography and Geovisualization, 2022
Outside geography, the term visualization has its origins in a special issue of Computer Graphics authored by Bruce McCormick and colleagues (1987). To McCormick et al., the objective of scientific visualization was “to leverage existing scientific methods by providing … insight through visual methods” (3). Work in scientific visualization extends far beyond the spatial data that geographers deal with to include topics such as medical imaging and the visualization of molecular structure and fluid flows. A classic reference is Peter Keller and Mary Keller's (1993) Visual Cues: Practical Data Visualization, which provides numerous examples of the use of scientific visualization; Helen Wright (2007) provides a more recent summary of the field. The most recent developments in scientific visualization can be found in the proceedings of the Institute of Electrical and Electronics Engineers (IEEE) Visualization conference, which has been held every year since 1990. Perhaps not surprisingly, the 2021 conference had a heavy focus on visual analytics (http://ieeevis.org/year/2021/welcome).
Exploratory Data Analysis and Data Visualization
Published in Chong Ho Alex Yu, Data Mining and Exploration, 2022
Nonetheless, neither EDA nor TQM popularized the term “visualization.” The term “scientific visualization” was coined by a panel of the Association for Computing Machinery (ACM) organized by the National Science Foundation’s Division of Advanced Scientific Computing (McCormick et al. 1987). At first, the term “scientific visualization” referred exclusively to visualization in scientific and engineering computing, such as computer modelling and simulation. Later, visualization practice expanded to include data sources from other disciplines, and eventually this movement merged with the movement of “information visualization,” which also started in the early 1990s (Herman et al. 2000; Post et al. 2002).
EdgeCFD: a parallel residual-based variational multiscale code for multiphysics
Published in International Journal of Computational Fluid Dynamics, 2020
Adriano M. A. Cortes, Erb F. Lins, Gabriel M. Guerra, Rômulo M. Silva, José L. D. Alves, Renato N. Elias, Fernando A. Rochinha, Alvaro L. G. A. Coutinho
The increase in computational power has been enabling high-fidelity simulations, with a direct impact on the amount of data to be stored. Storage requirements in numerical simulations are related not only to the level of refinement in time and space, but also to the length of the simulation (in time) and the frequency at which files must be saved. In the standard simulation workflow, these files are usually post-processed in a visualisation tool to build charts and iso-surface plots and to query the data, which helps users gain insight into the simulation results. The standard way to reduce storage demand is to reduce the frequency at which solution files are stored. However, this introduces a new issue: the discarded time steps have to be interpolated in the post-processing phase, losing part of the accuracy obtained with the refinement in time. It is becoming popular to modify the standard workflow to overlap or interleave the solution and post-processing steps, in a technique called co-processing or in-situ visualisation (Moreland et al. 2018), which preserves information while reducing storage requirements. For this purpose, the visualisation software must work alongside the simulation tool, preferably sharing the same memory space. ParaView (Moreland et al. 2018), an open-source scientific visualisation package, provides an API specifically designed for large-scale, parallel co-processing called Catalyst (Ayachit et al. 2015). In Catalyst, an adaptor must be implemented to start and finish the co-processing and to share the solver's data with the visualisation tool at runtime.
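The adaptor pattern described above can be sketched in miniature. The following is an illustrative sketch only, not the actual Catalyst API: the class and method names (`InSituAdaptor`, `request_coprocessing`, `coprocess`) are hypothetical stand-ins for the role a real adaptor plays, namely letting the solver ask, at each time step, whether the visualisation pipeline wants data, and then handing over the in-memory solution rather than writing every step to disk.

```python
# Minimal sketch of the co-processing (in-situ) pattern.
# All names here are hypothetical illustrations, not the Catalyst API;
# a real adaptor would pass solver arrays to ParaView at runtime.

class InSituAdaptor:
    """Connects a solver to an analysis/visualisation pipeline at runtime."""

    def __init__(self, analyze_every: int):
        self.analyze_every = analyze_every   # co-processing frequency
        self.results = []                    # stands in for pipeline output

    def request_coprocessing(self, step: int) -> bool:
        # The solver asks whether this time step needs analysis, so it
        # only exposes its data when the pipeline will actually use it.
        return step % self.analyze_every == 0

    def coprocess(self, step: int, field):
        # Operate on the solver's in-memory data (zero-copy in a real
        # adaptor); here we record a small summary instead of storing
        # the full solution file for this step.
        self.results.append((step, max(field)))


def run_solver(n_steps: int, adaptor: InSituAdaptor):
    field = [0.0] * 8
    for step in range(n_steps):
        field = [x + 0.5 for x in field]      # stand-in for a solver update
        if adaptor.request_coprocessing(step):
            adaptor.coprocess(step, field)
    return field


adaptor = InSituAdaptor(analyze_every=5)
run_solver(20, adaptor)
print(len(adaptor.results))  # 4 summaries kept instead of 20 solution files
```

The point of the structure is the one made in the text: the analysis step is interleaved with the solution step and works on data still resident in memory, so no time steps are discarded and later interpolated, yet far less data reaches disk.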