Acoustic conditions and requirements for the subjective assessment and monitoring of spatial sound
Published in Bosun Xie, Spatial Sound, 2023
Headphone monitoring is not influenced by the reflections of a listening room, so the acoustic requirements on the monitoring environment can be relaxed. However, as stated in Section 11.9.1, presenting stereophonic and multichannel sound signals directly over headphones yields incorrect binaural information; as such, it is inappropriate for evaluating spatial attributes (such as virtual source localization). Binaural or virtual monitoring through headphones can be used to address this problem. Similar to the cases in Sections 11.6 and 11.9.1, HRTF-based binaural synthesis is used to create virtual loudspeakers in headphone presentation, and the stereophonic or multichannel sound signals are monitored over these virtual loudspeakers. If necessary, the reflections of a listening room can be included in the binaural synthesis to simulate the perceived effect of a practical listening room.
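The virtual-loudspeaker scheme described above can be sketched as follows. The 3-tap HRIRs below are invented placeholders (a real system would load measured head-related impulse responses); only the convolve-and-sum structure is the point:

```python
import numpy as np

def virtual_loudspeakers(channels, hrirs):
    """Binaural synthesis: convolve each loudspeaker feed with the
    head-related impulse response (HRIR) pair for that loudspeaker's
    direction, then sum the results into a two-channel headphone signal."""
    n = max(len(sig) + len(hl) - 1 for sig, (hl, hr) in zip(channels, hrirs))
    out = np.zeros((2, n))
    for sig, (hrir_l, hrir_r) in zip(channels, hrirs):
        l = np.convolve(sig, hrir_l)   # signal reaching the left eardrum
        r = np.convolve(sig, hrir_r)   # signal reaching the right eardrum
        out[0, :len(l)] += l
        out[1, :len(r)] += r
    return out

# Toy example: a stereo signal rendered through two "virtual loudspeakers"
# with made-up, mirror-symmetric 3-tap HRIRs (placeholders, not measured data).
left_feed  = np.array([1.0, 0.5, 0.25])
right_feed = np.array([0.8, 0.4, 0.2])
hrirs = [
    (np.array([0.9, 0.1, 0.0]), np.array([0.3, 0.2, 0.1])),  # left virtual speaker
    (np.array([0.3, 0.2, 0.1]), np.array([0.9, 0.1, 0.0])),  # right virtual speaker
]
binaural = virtual_loudspeakers([left_feed, right_feed], hrirs)
print(binaural.shape)  # (2, 5): one convolved-and-summed signal per ear
```

Simulated room reflections would simply be extra taps appended to each HRIR before the convolution.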
Cloud VR Terminals
Published in Huaping Xiong, Dawei Li, Kun Huang, Mu Xu, Yin Huang, Lingling Xu, Jianfei Lai, Shengjun Qian, Cloud VR, 2020
Hearing is an important sense second only to vision and plays an increasingly important role in VR immersion. Dummy head recording (DHR) can faithfully capture the human ear's directional and frequency response, but it produces three-dimensional audio with fixed content and direction. Users often move their heads when experiencing VR content, and the perceived sound source should change to match; this provides better spatial immersion. A head-related transfer function (HRTF) is required to ensure visual and auditory consistency, to implement realistic sound direction and distance effects, and to simulate acoustic phenomena, including reflection, blocking, isolation, and reverberation. Dolby Laboratories, Facebook, and Google have all built immersive VR experiences that meet these auditory and acoustic requirements.
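The head-tracking requirement above can be sketched minimally (the function name and the angle convention, azimuth in degrees measured clockwise from straight ahead, are my own assumptions, not from the text): a world-fixed source must be re-rendered relative to the head each time the head turns.

```python
def relative_azimuth(source_az_deg, head_yaw_deg):
    """Azimuth of a world-fixed source relative to the listener's head,
    wrapped into (-180, 180]. As the head turns toward the source, the
    relative azimuth shrinks, so the HRTF renderer keeps the source
    anchored in the virtual world rather than glued to the head."""
    rel = (source_az_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

# A source fixed at 90 degrees in world coordinates:
print(relative_azimuth(90.0, 0.0))   # 90.0 -> rendered to the side
print(relative_azimuth(90.0, 90.0))  # 0.0  -> head now faces the source
```

The renderer would then select or interpolate the HRTF for the returned relative direction on every head-pose update.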
Localisable Alarms
Published in Neville A. Stanton, Judy Edworthy, Human Factors in Auditory Warnings, 2019
The final main piece of information processed by the brain regarding sound localisation is called the head-related transfer function (HRTF) (Carlile and King, 1993). The HRTF refers to the effect the external ear has on sound. As a result of passing over the bumps or convolutions of the pinna, the sound is modified so that some frequencies are attenuated and others are amplified. Although there are certain generalities in the way the sound is modified by the pinnae, the HRTF of any one person is unique to that individual. The role of the HRTF is particularly important when we are trying to determine whether a sound is immediately in front of, or directly behind, us. In this instance the timing and intensity differences are negligible and there is consequently very little information available to the central nervous system on which to base a decision of 'in front' or 'behind'. So, to locate the direction of a sound source accurately, a broad frequency content is needed: the richer the spectrum, the more pinna-induced spectral cues are available to overcome the front-back ambiguities inherent in single-tone sounds.
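The front-back ambiguity can be illustrated with a simplified interaural-time-difference (ITD) model (the sine-law formula and the head-width and sound-speed values below are textbook approximations, not from this chapter): a source in front at 30 degrees and its mirror image behind at 150 degrees produce the same ITD, so timing cues alone cannot separate them.

```python
import math

def itd_seconds(azimuth_deg, head_width_m=0.18, c=343.0):
    """Simplified ITD model: interaural path difference ~ d * sin(azimuth),
    with azimuth measured from straight ahead. A crude stand-in for the
    Woodworth spherical-head model."""
    return head_width_m * math.sin(math.radians(azimuth_deg)) / c

front = itd_seconds(30.0)   # source in front, 30 degrees to the side
back = itd_seconds(150.0)   # mirrored source behind, 150 degrees
print(abs(front - back) < 1e-9)  # True: identical ITD, the "cone of confusion"
```

Only the direction-dependent spectral filtering of the pinna (the HRTF) breaks this tie, which is why broadband sounds are localised more reliably than pure tones.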
Exploration of Head Related Transfer Function and Environmental Sounds as a Means to Improve Auditory Scanning for Children Requiring Augmentative and Alternative Communication
Published in Assistive Technology, 2020
John W. McCarthy, Jeffrey J. DiGiovanni, Dennis T. Ries, Jamie B. Boster, Travis L. Riffle
It is possible to encode spatial cues onto recorded sounds via overlay of a head-related transfer function (HRTF). The HRTF is a sound filtering mechanism created by the head, torso, and pinnae that helps us to localize sounds in space. When sound waves arrive at the body, the head/torso and pinnae (mostly for high-frequency sounds) modify the sound waves to create direction-dependent spectral peaks and valleys in the sound that reaches the eardrum. The pattern of spectral peaks and valleys is dependent upon the direction of the sound source relative to the body; sounds coming from different directions will create different spectral patterns. These spectral patterns are then interpreted and utilized to help determine the location of the sound in space (Ahveninen, Kopčo, & Jääskeläinen, 2014; Moore, 2003). Previous evidence indicates that adult computer users who are blind prefer spatially based rather than menu/hierarchical interfaces (Wersényi, 2010). Current technology makes it easier to encode sound location-based information onto recorded samples (Zhang, Abhayapala, Kennedy, & Duraiswami, 2010).
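The direction-dependent peaks and valleys can be mimicked with a deliberately crude toy model (the filters, delays, and sample rate below are invented for illustration, not measured HRTF data): a direct path plus a single pinna-like echo notches the spectrum, and changing the echo delay, as a change of direction would, moves the notches.

```python
import numpy as np

fs = 48000  # assumed sample rate in Hz

def toy_hrir(delay_samples, gain=0.9):
    """Toy direction-dependent filter: a direct path plus one echo.
    Purely illustrative, not measured HRTF data."""
    h = np.zeros(64)
    h[0] = 1.0
    h[delay_samples] = gain  # the echo creates spectral notches
    return h

def spectrum(h, n_fft=4096):
    """Magnitude spectrum of the filter."""
    return np.abs(np.fft.rfft(h, n_fft))

# An echo after 4 samples notches the spectrum at odd multiples of
# fs/8 = 6 kHz; an 8-sample echo moves the first notch to fs/16 = 3 kHz.
bin_of = lambda f_hz: int(f_hz * 4096 / fs)
mag4, mag8 = spectrum(toy_hrir(4)), spectrum(toy_hrir(8))
print(round(mag4[bin_of(6000)], 2), round(mag8[bin_of(3000)], 2))  # 0.1 0.1 (deep notches)
```

Overlaying such a direction-specific filter onto a recording, by convolution, is exactly the "encoding of spatial cues" the passage describes, with measured HRTFs in place of the toy filters.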
Auditory scene reproduction for tele-operated robot systems
Published in Advanced Robotics, 2019
Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro
Another strategy is to synthesize binaural sounds from a monaural source and play them over stereo headphones. A sound from a point in space is received differently by the two ears; the differences include interaural differences in sound pressure level and arrival time. It is generally accepted that humans perceive the spatial position of a sound source on the basis of these differences between the two ears [18,19]. Head-related transfer functions (HRTFs), which represent the relationship between a sound source position and the sound-wave properties at the two eardrums of a particular person [19], can be used to synthesize the binaural sounds to be reproduced over headphones.
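A minimal sketch of this monaural-to-binaural idea, using only the two interaural differences named above (the delay formula, head width, and 6 dB level difference are illustrative assumptions; the full approach in the article would convolve with measured HRTFs instead):

```python
import math
import numpy as np

def mono_to_binaural(mono, fs=48000, azimuth_deg=45.0,
                     head_width_m=0.18, c=343.0):
    """Crude binaural synthesis from interaural time and level
    differences only (no HRTF spectral cues): the ear farther from
    the source receives a delayed, attenuated copy of the signal."""
    itd = head_width_m * math.sin(math.radians(abs(azimuth_deg))) / c
    delay = int(round(itd * fs))        # far-ear delay in samples
    ild_gain = 10 ** (-6.0 / 20.0)      # assumed ~6 dB interaural level difference
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono]) * ild_gain
    if azimuth_deg >= 0:                # source on the right: right ear is near
        return np.stack([far, near])    # rows: (left, right)
    return np.stack([near, far])

mono = np.ones(8)
out = mono_to_binaural(mono)            # 45 deg right -> 18-sample far-ear delay
print(out.shape)  # (2, 26)
```

Replacing the bare delay-and-gain with per-direction HRTF convolution adds the spectral cues this simple model omits.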