Binaural and spatial hearing
Published in Stanley A. Gelfand, Hearing, 2017
Fundamentally, auditory scene analysis involves grouping the sounds impinging on the listener's ears into perceptual units called streams based on certain criteria or grouping principles. For example, the grouping factors of proximity and similarity pertain to how close or far apart sounds are in terms of their physical parameters. In other words, sounds tend to be grouped together when they are close and/or similar with respect to parameters such as frequency, spectral shape, timing, harmonicity (harmonics of a common fundamental frequency), intensity, and direction or spatial origin. On the other hand, sounds that are far apart or dissimilar in terms of these parameters tend not to be grouped together, but are perceived as separate streams. This is illustrated in Figure 13.2, which represents the perception of alternating higher- and lower-frequency tones. In the upper frame, the two tones are relatively close in frequency, and they are perceptually grouped into a single stream of alternating pitches (ABABABAB) that is heard to be coming from the same sound source. However, when the two tones are far apart in frequency, they are heard as two separate streams of interrupted tones (A … A … A … A and B … B … B … B) coming from different sound sources, as in the lower frame. The former case illustrates stream fusion (or stream integration), and the latter case is stream segregation.
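The alternating-tone demonstration described above is easy to reproduce in software. The sketch below (illustrative only, not from the chapter; function names are our own) synthesises two ABAB sequences with numpy: one with a small frequency separation, which listeners typically hear as a single fused stream, and one with a large separation, which typically segregates into two streams.

```python
import numpy as np

def tone(freq_hz, dur_s, fs=16000):
    """Sine tone with 5 ms raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    x = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * fs)
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def abab_sequence(freq_a, freq_b, n_pairs=4, tone_dur=0.1, fs=16000):
    """Concatenate alternating A and B tones (ABABABAB...)."""
    seq = []
    for _ in range(n_pairs):
        seq.append(tone(freq_a, tone_dur, fs))
        seq.append(tone(freq_b, tone_dur, fs))
    return np.concatenate(seq)

fs = 16000
fused = abab_sequence(500, 550, fs=fs)        # small separation: one stream
segregated = abab_sequence(500, 2000, fs=fs)  # large separation: two streams
```

Played over headphones, the first sequence tends to be heard as a single source with alternating pitch, the second as two interrupted sources, as in the upper and lower frames of Figure 13.2.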
The effects of auditory object identification and localization (AOIL) training on noise acceptance and loudness discomfort in persons with normal hearing
Published in Speech, Language and Hearing, 2019
K. L. Bees, D. Guan, N. Alsarrage, G. D. Searchfield
AOIL training has been used with participants with tinnitus and reduced the amount of masking required to attenuate the perception of tinnitus (Searchfield et al., 2007). The proposed mechanism underlying this effect was a change in auditory scene analysis and selective auditory attention brought about by auditory training (Searchfield et al., 2007). Changes in auditory scene analysis and auditory attention may also underlie the effect of AOIL training seen on acceptable noise levels (ANLs). Figure-ground separation in auditory scene analysis shares some similarities with the ANL. The ANL is measured by asking the participant to report the level of background noise they are willing to accept while listening to a passage clearly and comfortably – this involves directing attention to the passage of interest (figure) and separating it from the background noise (ground) (Teki, Chait, Kumar, von Kriegstein, & Griffiths, 2011). Brännström, Zunic, Borovac, and Ibertsson (2012) reported that working memory capacity is strongly associated with ANLs and background noise levels (BNLs), a finding consistent with the proposal that AOIL training affects the ANL through selective auditory attention. Hence, a reduction in ANL may reflect improved figure-ground separation and selective auditory attention after AOIL training.
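The ANL computation itself is simple: in the standard procedure it is the difference between the most comfortable listening level (MCL) for the passage and the maximum background noise level (BNL) the listener accepts. A minimal sketch (assuming this standard MCL − BNL definition; the function name is illustrative):

```python
def acceptable_noise_level(mcl_db, bnl_db):
    """ANL (dB) = most comfortable listening level minus the highest
    background noise level the listener is willing to accept."""
    return mcl_db - bnl_db

# A listener comfortable at 60 dB HL who tolerates noise up to 48 dB HL
anl = acceptable_noise_level(mcl_db=60.0, bnl_db=48.0)  # ANL = 12 dB
```

Lower ANLs indicate greater willingness to accept background noise, which is why a post-training reduction in ANL is interpreted as improved figure-ground separation.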
Development and psychometric properties of the sound preference and hearing habits questionnaire (SP-HHQ)
Published in International Journal of Audiology, 2018
Markus Meis, Rainer Huber, Rosa-Linde Fischer, Michael Schulte, Jan Spilski, Hartmut Meister
The Need for Cognitive Closure (NCC) paradigm (Webster & Kruglanski, 1994) may be a promising psychological construct for profiling people according to the strategies they use to extract information from the acoustic environment. Within the NCC paradigm, a person with a high need for closure prefers structure and predictability, is decisive and closed-minded, and is uncomfortable with ambiguity (van Hiel et al., 2003). Someone rating low on NCC expresses more fluidity, creativity, and openness to new situations. We assumed that the NCC construct is related to listening situations: for example, one group of people may focus on the speech-relevant aspects of communication episodes and tune out other environmental sounds, whereas others want to monitor the whole acoustic environment. These tendencies should also be reflected in spatial awareness and in orienting toward speakers in a complex auditory scene. Auditory scene analysis (Bregman, 1990) describes how the listener turns massive amounts of individual sounds into usable information through the cognitive activity of linking individual sounds over time to a common source. The cognitive and information-seeking aspects of the NCC approach regarding the environment may therefore be fruitful for characterising hearing habits.
Measuring the effect of adaptive directionality and split processing on noise acceptance at multiple input levels
Published in International Journal of Audiology, 2023
Francis Kuk, Christopher Slugocki, Neal Davis-Ruperto, Petri Korhonen
Communication in noise is a major challenge for listeners with a hearing loss. Noise masks the primitive cues used in auditory scene analysis (ASA, Bregman 1990) and degrades the identification of auditory objects. Noise also increases listeners' annoyance with the listening situation (Cohen and Weinstein 1981) and compromises ease of communication. Modern hearing aids (HAs) address this speech-in-noise (SIN) problem using technologies such as directional microphones (DIRMs) and noise reduction (NR) algorithms. Whereas DIRMs work on the assumption of spatial separation between speech and noise sources, NR algorithms use differences in modulation characteristics and the assumption of spectral separation between speech and noise signals to minimise the effect of noise (Chung 2004). In traditional HA designs, sounds are first processed by the microphone system, followed by compression and NR. Because current microphone systems provide only a single signal stream on which the NR and compressor may act, the same compression and NR settings are applied to the post-microphone signal regardless of any spatial separation between the speech and noise signals in the original input. This design limits possible further improvements to the final SNR. For example, a compressor cannot increase gain for a soft speech signal originating from the front while simultaneously reducing gain for loud noise originating from the back. Similarly, NR cannot selectively reduce the gain in frequency channels determined to be noisy without also affecting the gain applied to speech frequencies that overlap with those channels, even if the speech and noise sources are spatially separated.
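The channel-wise limitation described above can be made concrete with a toy gain rule. The sketch below (a simplified illustration, not any manufacturer's algorithm; names are our own) computes Wiener-like NR gains per frequency channel from estimated signal and noise power, with the attenuation capped at a fixed limit. Because there is one gain per channel, any speech energy in a noise-dominated channel is attenuated along with the noise.

```python
import numpy as np

def channel_nr_gains(signal_power, noise_power, max_atten_db=12.0):
    """Per-channel NR gains: attenuate channels with low estimated SNR,
    but never by more than max_atten_db."""
    snr = np.asarray(signal_power, dtype=float) / np.maximum(noise_power, 1e-12)
    gain = snr / (1.0 + snr)                 # Wiener gain, in [0, 1)
    floor = 10.0 ** (-max_atten_db / 20.0)   # attenuation limit as linear gain
    return np.maximum(gain, floor)

# Three channels: speech-dominated, mixed, noise-dominated
gains = channel_nr_gains(signal_power=[1.0, 0.5, 0.01],
                         noise_power=[0.01, 0.5, 1.0])
```

The speech-dominated channel keeps near-unity gain while the noise-dominated channel is attenuated toward the floor; in the mixed channel, speech and noise unavoidably receive the same reduced gain, which is exactly the limitation the single-stream design cannot overcome.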