Introduction to Sonification
Published in Michael Filimowicz, Foundations in Sound Design for Embedded Media, 2019
Considering that sound perception is time-based, sonification is by and large focused on rendering continuous data streams over time, such as changes in weather, stock market data, or engine noise. The sonification timeline does not have to match the normal passage of time and, like video, it can be played faster or slower, as well as forwards and backwards. Rather than continually outputting sound, sonification can also punctuate a specific threshold, as in our imaginary car ride where a single sound is played once the fuel level reaches a previously designated low point. Such a cue, or earcon, can therefore be seen as a subset of sonification. Unlike the continuous data stream examples, however, earcons typically interpret previously known patterns or conditions (e.g. a designated low fuel tank level), whereas continuous streams tend to be raw and unprocessed, leaving our brains to interpret them and spot any potential patterns, as with our skipping CD example. Furthermore, continuous data streams can be sonified in two different ways: by feeding the data directly into an audio output (e.g. a loudspeaker) with minimal processing, or by using it to modify a property of an engineered sound, such as its pitch, timbre, loudness, location, or a combination thereof.
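The two approaches in the last sentence can be sketched in a few lines of Python. This is a minimal illustration, not the chapter's implementation: the function names, sample rate, and frequency range are assumptions, and both functions simply return sample arrays that could be sent to a loudspeaker.

```python
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second (an assumed, typical value)

def audify(data):
    """Direct approach: feed the data stream to the audio output almost
    unchanged, only normalizing it into the speaker's [-1, 1] range."""
    data = np.asarray(data, dtype=float)
    peak = np.max(np.abs(data))
    return data / peak if peak > 0 else data

def parameter_map(data, duration=0.1, f_lo=220.0, f_hi=880.0):
    """Engineered approach: each data point sets one property (here the
    pitch) of a short synthesized tone between f_lo and f_hi Hz."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    norm = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    tones = [np.sin(2 * np.pi * (f_lo + n * (f_hi - f_lo)) * t) for n in norm]
    return np.concatenate(tones)

# Example: a slowly rising stream (say, a fuel level being refilled)
stream = np.linspace(0.0, 1.0, 8)
direct = audify(stream)         # raw stream, normalized for output
mapped = parameter_map(stream)  # same stream heard as rising pitches
```

The contrast is audible in practice: the direct version preserves the raw signal (and any patterns the listener's brain might pick out), while the parameter-mapped version trades fidelity for an engineered, easier-to-read cue.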
Smart textiles in the performing arts
Published in Gianni Montagna, Cristina Carvalho, Textiles, Identity and Innovation: Design the Future, 2018
Aline Martinez, Michaela Honauer, Hauke Sandhaus, Eva Hornecker
In general, sonification can be understood as ‘visualizing with sound’. Houri et al. (2011) use the term ‘audiolizing’ to describe body movements that are translated into sound. Sonification is frequently used to represent sensor data (for example, Geiger counter clicks) with non-speech audio. Hermann et al. (2011) identify three sonification functions: alarms, alerts, and warnings; status, process, and monitoring messages; and data exploration. They add a fourth function: art and entertainment. Various costumes and sensors exist that can create music, or sound, from body movements. However, these devices are aimed at musicians, not dancers. In collaboration with computer engineers and other artists, the British musician Imogen Heap created gloves that can make music (imogenheap.co.uk/thegloves). With devices such as this, a performer’s movements are made with respect to the musical notes, harmonics, and beats the musician wants to achieve. In contrast, the sound output of a costume for dancers should depend on the dance poses and body movements of the performers.
Principles of Symbolization
Published in Terry A. Slocum, Robert B. McMaster, Fritz C. Kessler, Hugh H. Howard, Thematic Cartography and Geovisualization, 2022
Terry A. Slocum, Robert B. McMaster, Fritz C. Kessler, Hugh H. Howard
Abstract sounds have no obvious meaning and thus require a legend to explain their use. For example, imagine a map of census tracts with the title “Median Income, 1997” in which different magnitudes of loudness represent different incomes (e.g., a mouse click on a high-income tract would produce a louder sound than a mouse click on a low-income tract). To understand the magnitudes of loudness, a legend indicating that a higher magnitude represents a higher income would have to be provided (if the reader were blind, someone would have to either tell the reader what each magnitude of loudness indicates or give the reader a tactile legend). The process of creating abstract sounds is sometimes referred to as sonification.
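The loudness-legend idea above can be made concrete with a small sketch. Everything here is illustrative rather than taken from the chapter: the function name, the amplitude range, and the tract incomes are hypothetical, and a real map interface would play a tone at the computed amplitude when a tract is clicked.

```python
def income_to_loudness(income, all_incomes, min_amp=0.1, max_amp=1.0):
    """Linearly map an income value onto an amplitude range.

    The accompanying legend would state: louder tone = higher median
    income. min_amp/max_amp are assumed output amplitudes in [0, 1].
    """
    lo, hi = min(all_incomes), max(all_incomes)
    if hi == lo:
        return max_amp
    frac = (income - lo) / (hi - lo)
    return min_amp + frac * (max_amp - min_amp)

# Hypothetical census tracts and their median incomes
tract_incomes = {"A": 18000, "B": 42000, "C": 96000}
loudness = {t: income_to_loudness(v, tract_incomes.values())
            for t, v in tract_incomes.items()}
```

A click on tract "C" (highest income) would yield the loudest tone; the legend, spoken or tactile for blind readers, is what makes this abstract mapping interpretable.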
Musical sonification supports visual discrimination of color intensity
Published in Behaviour & Information Technology, 2019
Sonification, the transformation of data into sound, can be used to supplement the visual modality when a user studies a visualisation of data, further supporting understanding of the visual representation (Kramer et al. 2010; Hermann, Hunt, and Neuhoff 2011; Pinch and Bijsterveld 2012; Franinovic and Serafin 2013). Traditionally, sonification has taken the form of audification of data, where data might be converted to a sound wave or translated into frequencies (Hermann, Hunt, and Neuhoff 2011; Pinch and Bijsterveld 2012). However, it can be questioned to what extent this type of sonification is able to convey information and meaning to a user. Going beyond plain audification of data (Philipsen and Kjaergaard 2018), sonification can be approached by deliberately designing and composing musical sounds. Even though the concept of sonification for data exploration is not new (see for example Flowers, Buhman, and Turnage 2005), there are few studies that evaluate visualisation and sonification in combination (see for example Flowers, Buhman, and Turnage 1997; Nesbitt and Barrass 2002; Kasakevich et al. 2007; Riedenklau, Hermann, and Ritter 2010; Rau et al. 2015). These studies suggest that sonification is beneficial in connection with visualisation; however, few have explored the appreciation of the sounds in the sonification or the use of musical sounds. Musical sounds are here referred to as deliberately designed and composed sounds, based on a music-theoretical and aesthetic approach.
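One way to read the distinction drawn above is that plain audification maps data to arbitrary frequencies, while a musical approach constrains the output to notes of a scale. The sketch below illustrates that constraint; the scale choice (a C-major pentatonic, which avoids dissonant intervals) and the mapping are assumptions for illustration, not the authors' method.

```python
# MIDI note numbers for a C-major pentatonic scale: C4 D4 E4 G4 A4 C5
C_PENTATONIC_MIDI = [60, 62, 64, 67, 69, 72]

def midi_to_hz(note):
    """Standard equal-temperament conversion (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def musical_sonify(data):
    """Quantize each data value to the nearest note of the scale,
    returning one frequency in Hz per value."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    freqs = []
    for x in data:
        idx = round((x - lo) / span * (len(C_PENTATONIC_MIDI) - 1))
        freqs.append(midi_to_hz(C_PENTATONIC_MIDI[idx]))
    return freqs

# Hypothetical input: color intensities sampled along a visualisation
intensities = [0.1, 0.5, 0.9, 0.3]
freqs = musical_sonify(intensities)
```

Because every output frequency belongs to the scale, consecutive values form melodic intervals rather than arbitrary glides, which is one concrete sense in which composed musical sounds differ from raw audification.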
From Visual Art to Music: Sonification Can Adapt to Painting Styles and Augment User Experience
Published in International Journal of Human–Computer Interaction, 2023
Chihab Nadri, Chairunisa Anaya, Shan Yuan, Myounghoon Jeon
The application of sonification for visually impaired individuals has shown promise, with ongoing research seeking to expand the accessibility of different experiences for these individuals (Iakovidis et al., 2020; Sekhavat et al., 2022). Dynamic data sonification has also been an active area of research, with applications for artistic experiences at aquariums (Jeon et al., 2012) as well as for visually impaired individuals (Ji et al., 2021) or gesture sonification (Vatavu, 2017).