Introduction to Sonification
Published in Michael Filimowicz, Foundations in Sound Design for Embedded Media, 2019
The origins of the term sonification are not entirely clear. Arguably, its first official mention in an academic publication dates back to 1990 (Rabenhorst et al. 1990), when it referred to a complementary aural counterpart to data visualization. Since then, its definition has continually evolved, resulting in a number of similar yet distinct definitions, many of which continue to be used concurrently. Worrall, in his overview of sonification (Worrall 2009), presents a comprehensive review of the term’s evolution, while also offering the all-encompassing definition: “Data sonification is the acoustic representation of data for relational interpretation by listeners, for the purpose of increasing their knowledge of the source from which the data was acquired.” Another, considerably simpler, version offered by Kramer et al. (1999) defines sonification as “the use of non-speech audio to convey information.” In this chapter, allow me to propose what may be arguably the simplest and most inclusive version, which defines sonification as “audio that conveys information,” thereby leaving all the possibilities for sonification open, including human speech, music and even environmental sounds, as is the case with the rain and wind noises in our imaginary car ride example. Unlike the dashboard sounds that have been engineered by humans, the environmental sounds simply exist due to the laws of nature that govern our universe. As such, they could be seen as unintentional but nonetheless useful byproducts, whose usefulness is attained through repeated exposure and growing sensitivity to their nuances.
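As a rough illustration of the “audio that conveys information” idea, the sketch below shows the simplest form of parameter-mapping sonification: a numeric series is mapped to a sequence of sine tones whose pitch rises and falls with the data. It is not taken from the chapter; the data values, pitch range, tone length, and file name are assumptions chosen only for illustration, and only the Python standard library is used.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100           # samples per second (assumed)
TONE_SECONDS = 0.25           # duration of each data-point tone (assumed)
F_LOW, F_HIGH = 220.0, 880.0  # pitch range in Hz (assumed)

def sonify(values, path="sonification.wav"):
    """Map each value linearly to a pitch between F_LOW and F_HIGH and
    write the resulting tone sequence to a mono 16-bit WAV file."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    frames = bytearray()
    for v in values:
        # Linear parameter mapping: data value -> tone frequency.
        freq = F_LOW + (v - lo) / span * (F_HIGH - F_LOW)
        for i in range(int(SAMPLE_RATE * TONE_SECONDS)):
            sample = 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# Example: an upward data trend becomes an ascending pitch contour.
sonify([1, 2, 3, 5, 8, 13, 21])
```

Listening to the output, a rising trend in the data is heard as a rising melodic contour, which is the relational interpretation the definitions above refer to.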
Sonification Use Cases in Highly Automated Vehicles: Designing and Evaluating Use Cases in Level 4 Automation
Published in International Journal of Human–Computer Interaction, 2023
Chihab Nadri, Sangjin Ko, Colin Diggs, Michael Winters, Sreehari Vattakkandy, Myounghoon Jeon
Sonification, the transcription of data into non-speech sound (Nees & Walker, 2011), is a display method that has been suggested for automated vehicles (e.g., driving data sonification; Landry et al., 2016). This display method has been used to transcribe vehicle states, intentions, or user emotions into sound in order to increase situation awareness (SA) (Gang et al., 2018; Landry et al., 2016). SA can be defined as the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future (Endsley, 1988a).
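To make the idea of transcribing vehicle states and intentions into sound concrete, the sketch below maps a few hypothetical states to short earcon-like note sequences. The state names, note patterns, and synthesis parameters are invented for illustration and are not the sound designs used in the cited studies.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (assumed)

# Hypothetical mapping of vehicle states/intentions to short note sequences
# (frequencies in Hz); real in-vehicle designs would be tuned and evaluated.
EARCONS = {
    "lane_change_left":  [440.0, 523.3],         # rising two-note pattern
    "lane_change_right": [523.3, 440.0],         # falling two-note pattern
    "takeover_request":  [660.0, 660.0, 660.0],  # urgent repeated tone
}

def render_earcon(state, path, note_seconds=0.15):
    """Write the earcon for `state` as a mono 16-bit WAV file."""
    frames = bytearray()
    for freq in EARCONS[state]:
        for i in range(int(SAMPLE_RATE * note_seconds)):
            sample = 0.6 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# Example: render the earcon announcing a takeover request.
render_earcon("takeover_request", "takeover_request.wav")
```

The design choice here is a discrete mapping (one recognizable sound pattern per state or intention), in contrast to the continuous data-to-pitch mapping shown earlier; both are common strategies for supporting situation awareness with audio.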
From Visual Art to Music: Sonification Can Adapt to Painting Styles and Augment User Experience
Published in International Journal of Human–Computer Interaction, 2023
Chihab Nadri, Chairunisa Anaya, Shan Yuan, Myounghoon Jeon
The application of sonification for visually impaired individuals has shown promise in the past, with ongoing research seeking to expand the accessibility of different experiences for these individuals (Iakovidis et al., 2020; Sekhavat et al., 2022). Dynamic data sonification has also been an area of research, with applications ranging from artistic experiences at aquariums (Jeon et al., 2012) to uses for visually impaired individuals (Ji et al., 2021) and gesture sonification (Vatavu, 2017).