Spatial sound reproduction by wave field synthesis
Published in Bosun Xie, Spatial Sound, 2023
Wave field synthesis (WFS) is another sound field-based technique and system that aims to reconstruct a target sound field in an extended region. In this chapter, WFS is described in detail and analyzed within the framework of the general formulations of sound field reconstruction with multiple secondary sources. In Section 10.1, the basic principle and method, i.e., the traditional analyses of WFS, are presented. In Section 10.2, the general theory of WFS is discussed from the point of view of mathematical and physical analysis. In Section 10.3, the characteristics of WFS, especially spatial aliasing caused by a discrete secondary source array, are analyzed in the spatial spectrum domain. In Section 10.4, the relationship among acoustical holography, WFS, and Ambisonics is described. In Section 10.5, WFS equalization under nonideal conditions is discussed.
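As a rough illustration of the basic principle summarized above (and not of the chapter's exact driving functions), the sketch below builds simple delay-and-attenuate driving signals for a virtual point source behind a linear secondary source array; the array geometry, spacing, and the 1/sqrt(r) amplitude law are assumptions made only for this example.

```python
import numpy as np

# Illustrative delay-and-attenuate sketch of the basic WFS idea for a
# virtual point source behind a linear loudspeaker array. Geometry,
# spacing, and the amplitude law are assumed values for demonstration.

c = 343.0          # speed of sound in m/s
fs = 48000         # sample rate in Hz
dx = 0.15          # loudspeaker spacing in m (assumed)
n_speakers = 32

# Secondary source (loudspeaker) positions along the x-axis at y = 0
x_spk = (np.arange(n_speakers) - (n_speakers - 1) / 2) * dx
y_spk = np.zeros(n_speakers)

# Virtual point source 2 m behind the array (assumed position)
x_src, y_src = 0.5, -2.0

# Distance from the virtual source to each secondary source
r = np.hypot(x_spk - x_src, y_spk - y_src)

# Per-loudspeaker delay (samples) and gain: the wavefront reaches nearer
# loudspeakers first; amplitude is taken to decay roughly as 1/sqrt(r).
delays = np.round(r / c * fs).astype(int)
delays -= delays.min()          # remove the common propagation delay
gains = 1.0 / np.sqrt(r)
gains /= gains.max()

def drive_signals(mono, delays, gains):
    """Build one delayed, scaled copy of `mono` per loudspeaker."""
    n = len(mono) + int(delays.max())
    out = np.zeros((len(delays), n))
    for k, (d, g) in enumerate(zip(delays, gains)):
        out[k, d:d + len(mono)] = g * mono
    return out
```

Note that with a discrete array of this spacing, reconstruction is only accurate below a spatial aliasing limit (on the order of c / (2 dx), roughly 1.1 kHz here), which is the issue analyzed in the spatial spectrum domain in Section 10.3.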
From immersion to collaborative embodiment: extending collective interactivity in built environments through human-scale audiovisual immersive technologies
Published in Digital Creativity, 2020
Mincong Huang, Samuel Chabot, Ted Krueger, Carla Leitão, Jonas Braasch
It is possible to recreate an inhomogeneous, sweet-spot-free sound field in the CRAIVE-Lab over the whole user area using the spatial audio rendering technique wave-field synthesis. A rapid 2D ray-tracing algorithm, developed for this method of acoustic holography, simulates a spatially accurate sound field – see Figure 3 (Blauert and Braasch 2020). From this simulation, individual room impulse responses can be extracted for all 128 loudspeakers installed along the circumference of the CRAIVE-Lab. The sound of music ensembles can be reproduced in a spatially correct manner, including direct sounds, specular reflections, and diffuse reverberation. In this way, concert venues and other acoustical enclosures can be simulated in great detail, allowing an acoustical walk-through of virtual buildings to experience the varying acoustics at different locations. Some of the simulated venues, such as the simulated Dan Harpole Cistern (Fort Worden, WA), can be played with a live music ensemble (see Figure 5) and have entertained audiences at several conferences with musicians across abilities, including our annual International Symposium for Assistive Technology for Music and Arts (ISATMA). Recreating within the enclosed volume the sound field of the extended visual environment brings the without within and blends the boundary between the three typologies (Figure 4).
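The auralization step described above can be illustrated with a minimal sketch: given one room impulse response per loudspeaker, a dry (anechoic) recording is convolved with each response to produce the loudspeaker feeds. The array shapes and synthetic data below are placeholders, not the CRAIVE-Lab rendering software.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_feeds(dry: np.ndarray, rirs: np.ndarray) -> np.ndarray:
    """Convolve a mono dry signal with one room impulse response (RIR)
    per loudspeaker.

    dry  : shape (n_samples,)
    rirs : shape (n_speakers, rir_length), e.g. (128, rir_length)
    returns shape (n_speakers, n_samples + rir_length - 1)
    """
    return np.stack([fftconvolve(dry, rir) for rir in rirs])

# Synthetic stand-ins for a simulated RIR set and a dry recording
dry = np.random.randn(48000)               # 1 s of noise at 48 kHz
rirs = np.random.randn(128, 24000) * 0.01  # 0.5 s RIRs, 128 loudspeakers
feeds = render_feeds(dry, rirs)            # one channel per loudspeaker
```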
Reflections on sonic digital unreality
Published in Digital Creativity, 2019
Sara Pinheiro, Matěj Šenkyřík, Jiří Rouš, Petr Zábrodský
Small adjustments to the distribution can make the listening experience feel unreal. Within the possibilities of multichannel spatialization, the distribution is directly connected to the particular position of the loudspeakers in the space. For example, an Ambisonic system places the loudspeakers (ideally) in a hemisphere, while Wave Field Synthesis uses arrays of loudspeakers in which spatialization is achieved by reconstructing the wavefront of the sound event. Displacing the loudspeakers from this linear array or switching the channels can suffice to completely break a credible illusion of space. Furthermore, a different trajectory can be created by switching the linearity of a few channels, producing a movement that can be perceived as unreal. At the same time, the physical sound spatialization (loudspeaker placement) defines how the sound travels to the listener’s ears. The displacement can be enhanced by introducing delays between the different channels playing the same sound source. If the same sound is played on multiple loudspeakers at once, the delay between them determines where the sound is localized. According to tests by Wallach, Newman, and Rosenzweig (1949) and Haas (1972), the precedence effect operates for delays below about 30 ms; in this case, the two different positions merge into one. When the delay exceeds 30 ms, the positions become distinguishable and the sounds are perceived separately, or as an echo. Thus, short delays between the channels can be used to manipulate the listener’s perception of the space.
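A minimal sketch of this inter-channel delay idea, assuming a mono source duplicated on two loudspeaker channels: delays below roughly 30 ms fuse into a single image pulled toward the leading channel (precedence effect), while longer delays are heard as a separate repetition or echo. The sample rate and delay values are illustrative.

```python
import numpy as np

fs = 48000  # sample rate in Hz (assumed)

def two_channel_delay(mono: np.ndarray, delay_ms: float, fs: int = fs):
    """Return (leading, lagging) channels, the lag delayed by delay_ms."""
    d = int(round(delay_ms * 1e-3 * fs))
    lead = np.concatenate([mono, np.zeros(d)])
    lag = np.concatenate([np.zeros(d), mono])
    return lead, lag

source = np.random.randn(fs // 2)          # 0.5 s test signal
fused = two_channel_delay(source, 10.0)    # ~10 ms: one fused image
echoed = two_channel_delay(source, 50.0)   # ~50 ms: heard as an echo
```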