Bringing Sound to Interaction Design
Published in Michael Filimowicz, Foundations in Sound Design for Embedded Media, 2019
Auditory display design literature posits that sound offers several benefits for interaction design. Concepts and guidelines for notification and warning sounds, design patterns such as earcons and auditory icons, and sonification strategies for representing data through sound have been widely discussed and investigated (Neuhoff 2011). The relevance of sound for interaction design has also been recognized at the Interaction Design Department (IAD) of the Zurich University of the Arts (ZHdK), where sound design has been a fixed part of the curriculum since 2005, with students, lecturers and researchers all contributing to the emerging field of Sonic Interaction Design (Franinović and Serafin 2013). But in interaction design education, which is multidisciplinary in nature, sound competes with many other areas of knowledge and skill, such as user interface design and programming, user experience and service design, electronics, digital fabrication and many others.
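To make the earcon concept concrete, here is a minimal sketch in Python with NumPy; the two-note motif, durations and file name are illustrative assumptions, not taken from the chapter. An earcon is an abstract musical figure assigned to an event, so a short rising motif might signal an incoming message while its falling inversion signals an outgoing one.

    import numpy as np
    import wave

    SR = 44100  # sample rate in Hz

    def tone(freq, dur, amp=0.3):
        # Sine tone with 10 ms fade-in/out to avoid clicks.
        t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
        env = np.minimum(1.0, np.minimum(t, dur - t) / 0.01)
        return amp * env * np.sin(2 * np.pi * freq * t)

    # A rising two-note motif as a hypothetical "message received" earcon;
    # reversing the two pitches would give a "message sent" counterpart.
    received = np.concatenate([tone(660, 0.12), tone(880, 0.18)])

    with wave.open("earcon_received.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(SR)
        f.writeframes((received * 32767).astype(np.int16).tobytes())

Unlike an auditory icon, which imitates an everyday sound, the earcon's mapping to its event is arbitrary and must be learned, which is why such motifs are usually kept short and musically distinct.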
The Design of Interactive Real-Time Audio Feedback Systems for Application in Sports
Published in Veronika Tzankova, Michael Filimowicz, Interactive Sports Technologies, 2022
Nina Schaffert, Sebastian Schlüter
The research field of sonic interaction design explores methods to convey information (including aesthetic and emotional qualities) using sounds in interactive contexts (Rocchesso & Serafin, 2009; Hermann, Hunt & Neuhoff, 2011). It deals with the challenges of creating sound-mediated interactions by designing and implementing novel interfaces that produce sounds in response to human gestures. Sonic interaction design is closely related to sonification, a subtopic of the human-computer interaction (HCI) field (Hermann, 2008). Sonification addresses how information can be conveyed in an auditory, usually non-speech, form: data from various sources are transformed into sound to help the listener understand and interpret them. Interactive sonification, a subfield of sonification, is then the study of human interaction with a system that converts motion data into sound (Hermann et al., 2011). It investigates the action-perception loop that arises from interaction with the developed interfaces: users discover how their gestures and movements modulate the sound. In movement sonification, which is also part of interactive sonification, the challenge is to control the execution of movements in response to sounds; here, the auditory feedback can guide users' actions by providing information on how to modify the actions themselves. In contrast to the graphical displays traditionally used in sports technique training, such as graphs or video, sonification is particularly attractive because communicating through sound does not occupy an athlete's vision, preserving the visual stimuli required for performance.
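As a minimal sketch of the parameter-mapping idea behind sonification (the function name, pitch range and simulated motion signal below are assumptions for illustration, not taken from the chapter), the following Python/NumPy script maps a stream of motion values onto the pitch of a continuous tone and writes the result to a WAV file:

    import numpy as np
    import wave

    SR = 44100  # audio sample rate in Hz

    def sonify(values, seg_dur=0.1, f_lo=220.0, f_hi=880.0):
        # Parameter mapping: each data value sets the pitch of a short
        # sine segment; phase is carried across segments so the result
        # sounds like one continuously gliding tone.
        v = np.asarray(values, dtype=float)
        span = v.max() - v.min()
        norm = (v - v.min()) / span if span > 0 else np.zeros_like(v)
        freqs = f_lo * (f_hi / f_lo) ** norm   # log-spaced pitch mapping
        n = int(SR * seg_dur)
        t = np.arange(n) / SR
        out, phase = [], 0.0
        for f in freqs:
            out.append(0.3 * np.sin(2 * np.pi * f * t + phase))
            phase += 2 * np.pi * f * n / SR    # keep phase continuous
        return np.concatenate(out)

    # Simulated cyclic motion signal, e.g. a repeated stroke-force curve.
    motion = np.sin(np.linspace(0, 4 * np.pi, 80)) ** 2
    audio = sonify(motion)

    with wave.open("sonification.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(SR)
        f.writeframes((audio * 32767).astype(np.int16).tobytes())

An interactive system would replace the simulated array with live sensor data and synthesize the tone in real time, closing the action-perception loop described above.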
Gestural systems for the voice: performance approaches and repertoire
Published in Digital Creativity, 2018
The work employs string and percussive physical models created in the computer environment Modalys (Eckel, Iovino, and Caussé 1995) to merge voice, pre-recorded audio sources and movement data into hybrid sounds that reflect the intersection between the gestural and vocal nuances of the performer. A physical model is a software simulation of a sonic object that replicates its sound-producing mechanism (Kojs, Serafin, and Chafe 2007). In physical modelling synthesis, this mechanism is separated into an exciter, an external source that injects energy, and a resonator, the vibrating object that shapes the resulting sound. The sonic interaction design builds on past artistic work that combines voice with physical models, notably Mauro Lanza's composition Erba nera che cresci segno nero tu vivi (1999), which explores links between music and language, fusing the operatic voice with extended cyberinstruments (Kojs, Serafin, and Chafe 2007). The result is a combination of voice, text, movement data, and modelled physical objects and excitation types, creating a hybrid sound that inherits qualities from both parents (Kojs, Serafin, and Chafe 2007).
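As a minimal illustration of the exciter/resonator separation described above (an illustrative stand-in, not the Modalys models used in the work), the following Python/NumPy sketch implements Karplus-Strong plucked-string synthesis, a simple physical model in which a white-noise burst (the exciter) energizes a recirculating, lossy delay line (the resonator):

    import numpy as np
    import wave

    SR = 44100  # sample rate in Hz

    def plucked_string(freq, dur=1.5, damping=0.996):
        # Exciter: a burst of white noise fills the delay line.
        period = int(SR / freq)   # delay length sets the pitch
        delay = np.random.uniform(-1.0, 1.0, period)
        out = np.empty(int(SR * dur))
        # Resonator: a recirculating delay line with a lossy averaging
        # filter; high frequencies decay faster, as on a real string.
        for i in range(len(out)):
            j = i % period
            out[i] = delay[j]
            delay[j] = damping * 0.5 * (delay[j] + delay[(j + 1) % period])
        return out

    note = 0.5 * plucked_string(220.0)   # an A3 pluck

    with wave.open("pluck.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(SR)
        f.writeframes((note * 32767).astype(np.int16).tobytes())

Swapping the noise burst for another excitation signal, such as a recorded vocal fragment, changes the character of the output while the resonator stays the same, which is the hybridization principle the excerpt describes.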