Sound field, spatial hearing, and sound reproduction
Published in Bosun Xie, Spatial Sound, 2023
Another function of the inner ear is to convert mechanical oscillation into neural pulses or activities. Hair cells are found between the basilar membrane and the tectorial membrane. When the basilar membrane moves up and down, a shearing motion is created between the basilar membrane and the tectorial membrane. Consequently, the stereocilia on the surface of the hair cells bend, causing potassium ions to flow into hair cells and altering the potential difference between the inside and outside of hair cells. In turn, the release of neurotransmitters is induced, and action potentials in the neurons of the auditory nerve are initiated. In this way, hair cells convert the oscillation of the basilar membrane into neural pulses, and the neural system then conveys neural pulses to the high-level system for further processing. However, the conversion mechanism of the inner ear is complex and beyond the scope of this book. More details can be found in textbooks on physiological acoustics (Gelfand, 2010).
Hearing, Sound, Noise, and Vibration
Published in R. S. Bridger, Introduction to Human Factors and Ergonomics, 2017
Since the cochlea is sealed, inward movement of the stapes pushes the oval window inward and causes a column of fluid to move inward into the cochlea. At the other end of the cochlea, the round window moves outward. Fibers in the basilar membrane run along its length projecting outward from the center of the cochlea. At the base of the cochlea, close to the oval and round windows, these fibers are short and thick. Toward the apex, they become longer and thinner. Sound vibrations at a particular frequency are transmitted by the stapes and cause motion in the fluid column in the cochlea. This sets up a traveling wave in the cochlea. At a certain point along the cochlea, the basilar membrane is just the right thickness to resonate at the particular frequency of the cochlear wave. At this point along the length of the cochlea, maximum vibratory amplitude will occur.
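As a rough numerical illustration of this place-to-frequency mapping (not taken from the chapter above), the Greenwood (1990) function is often used to relate a position along the basilar membrane to its characteristic frequency. The Python sketch below uses the commonly quoted human parameters; it is an assumption-laden aside, not the author's model.

```python
import numpy as np

def greenwood_cf_hz(x_from_apex):
    """Characteristic frequency at relative basilar-membrane position x.

    x_from_apex runs from 0 (apex, long thin fibers, low frequencies)
    to 1 (base, short stiff fibers, high frequencies).
    Greenwood (1990) human map: F = A * (10**(a*x) - k).
    """
    A, a, k = 165.4, 2.1, 0.88  # commonly quoted human parameters
    return A * (10.0 ** (a * np.asarray(x_from_apex, dtype=float)) - k)

print(greenwood_cf_hz([0.0, 0.5, 1.0]))  # roughly 20 Hz, 1.7 kHz, 20.7 kHz
```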
Voice Subtitle Transmission in the Marine VHF Radiotelephony
Published in Adam Weintrit, Marine Navigation, 2017
This phenomenon is explained by the anatomy and physiology of the inner ear. The inner ear contains the cochlea, a snail-shaped structure. Inside the cochlea lies the basilar membrane, which performs significant signal processing when converting the sound signal into neural stimuli. Different local regions along the basilar membrane resonate at their own characteristic frequencies. This leads to a tonotopic organization of frequency sensitivity along the membrane, which can be modeled as an array of overlapping band-pass auditory filters. The auditory filters are associated with points along the basilar membrane and determine the frequency selectivity of the cochlea, and therefore the listener's ability to discriminate between different sounds. The bandwidth of each auditory filter coincides with the corresponding critical band, and the filter responds to the total sound energy within that critical band rather than to the details of the spectrum.
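As a minimal illustration of the critical-band idea (not part of the original article), the sketch below uses Zwicker and Terhardt's (1980) analytic approximation for critical bandwidth as a function of center frequency; the exact auditory-filter shapes assumed in the text may differ.

```python
def critical_bandwidth_hz(fc_hz):
    """Approximate critical bandwidth (Hz) at center frequency fc_hz.

    Zwicker & Terhardt (1980) analytic fit to the Bark critical bands.
    """
    return 25.0 + 75.0 * (1.0 + 1.4 * (fc_hz / 1000.0) ** 2) ** 0.69

# A band-pass auditory filter centered at 1 kHz integrates energy over
# roughly this bandwidth rather than resolving finer spectral detail.
for fc in (250.0, 1000.0, 4000.0):
    print(f"{fc:6.0f} Hz -> critical band ~ {critical_bandwidth_hz(fc):6.0f} Hz")
```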
Evaluating the effect of multi-sensory stimulations on simulator sickness and sense of presence during HMD-mediated VR experience
Published in Ergonomics, 2021
Simone Grassini, Karin Laumann, Virginia de Martin Topranin, Sebastian Thorp
The auditory system is also widely involved in exploring and navigating the environment; the ability to correctly localise sounds is an important tool for exploring real and virtual environments. Sound waves are converted by the basilar membrane in the cochlea into mechanical vibrations and then into electrical signals, which various brain centres compare across the inputs from both ears in order to localise sounds. As an example, it has been shown that blind people can construct coherent spatial mental maps using virtual navigation with acoustic information alone (Picinali et al. 2014). Even though auditory cues are not part of the sensory conflict theory, there is evidence that auditory cues may contribute to SS (Keshavarz et al. 2014).
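As a hedged aside (not from the article above), one binaural localisation cue, the interaural time difference, can be illustrated with a simple cross-correlation of the two ear signals. The sketch below is a crude estimator under idealised, noise-free assumptions and is not the model used in the cited studies.

```python
import numpy as np

def estimate_itd_seconds(left, right, fs, max_itd=800e-6):
    """Crude interaural time difference estimate via cross-correlation.

    Positive values mean the left-ear signal lags the right-ear signal,
    i.e. the source is assumed to lie toward the right.
    """
    n = len(left)
    corr = np.correlate(left, right, mode="full")  # lags -(n-1) .. (n-1)
    lags = np.arange(-(n - 1), n)
    keep = np.abs(lags) <= int(max_itd * fs)       # physiologically plausible lags only
    best = lags[keep][np.argmax(corr[keep])]
    return best / fs

# Toy example: the left-ear copy lags the right-ear copy by 10 samples.
fs = 48000
sig = np.random.randn(4800)
print(estimate_itd_seconds(np.roll(sig, 10), sig, fs))  # ~ +10/fs seconds
```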
A hybrid approach for speech enhancement using Bionic wavelet transform and Butterworth filter
Published in International Journal of Computers and Applications, 2020
BWT is a time-adaptive wavelet transform based on the Morlet wavelet and specifically designed for modeling human vocal signals. It is consistent with the Giguere–Woodland nonlinear transmission-line model of the auditory system, and it was developed in line with the active control mechanism that the cochlea applies during speech processing. The cochlea, housed within the inner ear, plays a noteworthy role in decoding the spectral information in speech. When sound is transmitted into the cochlea, it sets the basilar membrane (BM) into vibration. Each position along the BM responds to a particular sound frequency with a large displacement, so the BM acts as a physiological frequency separator and can be modeled as a series of filters. However, unlike wavelet transform (WT) filters, which have an invariant quality factor Q_f = f_c / Δf_c, where f_c and Δf_c are the center frequency and the bandwidth of the filter, the cochlea can alter its Q_f factors adaptively over a wide dynamic range. Such alteration improves the tuning characteristic of the cochlea and results in greater frequency sensitivity and selectivity for the speech signal.

Several models describe this adaptive modification of the cochlear filter through the adjustment of the acoustic resistance R_e and the compliance C of the BM. Here, the parallel R_e and C adjustments are adopted as functions of d_s(x, t), the displacement of the BM at position x and time t, and of its first-order time derivative, where G_c1 and G_c2 denote two active factors of the cochlea, and R_e(x) and C(x) are the passive BM acoustic resistance and compliance, respectively. The Q_f factor of the cochlear filter can then be expressed in terms of these adjustments together with a saturation factor.
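To make the fixed-versus-adaptive Q contrast concrete, the following minimal Python sketch builds a complex Morlet band-pass kernel whose quality factor is an explicit parameter. In a bionic-wavelet style analysis this q would be updated adaptively over time from the filter output, whereas here it is fixed; the kernel construction and the crude Q-to-envelope-width relation are illustrative assumptions, not the formulation of the cited BWT.

```python
import numpy as np

def morlet_kernel(f0_hz, q, fs, dur_s=0.05):
    """Complex Morlet band-pass kernel centered at f0_hz with quality factor q.

    A crude mapping is assumed: treating the spectral standard deviation as the
    bandwidth gives an envelope width sigma_t = q / (2*pi*f0); larger q means a
    longer envelope and hence a narrower, more selective filter.
    """
    t = np.arange(-dur_s / 2.0, dur_s / 2.0, 1.0 / fs)
    sigma_t = q / (2.0 * np.pi * f0_hz)
    return np.exp(-0.5 * (t / sigma_t) ** 2) * np.exp(2j * np.pi * f0_hz * t)

fs = 16000.0
x = np.random.randn(int(0.2 * fs))                    # stand-in for a speech frame
y_fixed = np.convolve(x, morlet_kernel(1000.0, q=4.0, fs=fs), mode="same")
y_sharp = np.convolve(x, morlet_kernel(1000.0, q=8.0, fs=fs), mode="same")  # "adapted", narrower filter
```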
Implementation of optimised wavelet packet algorithms for audibility enhancement
Published in Australian Journal of Electrical and Electronics Engineering, 2020
Jayant J. Chopade, N. P. Futane
Physiologically, the cochlea, a part of the inner ear, is responsible for filtering frequencies, which results in frequency selectivity. The different frequency components of a complex signal each produce a peak in the vibration pattern at a particular place along the basilar membrane, where the cilia of the hair cells are excited. These components are coded independently at the auditory nerve, which transmits the signals to the brain. Independent coding takes place only if the frequency components are sufficiently different; otherwise the signals are coded at the same place and are treated as a single sound (Moore 1997).

Auditory masking is classified according to whether a non-simultaneous masker is present, that is, whether the signal and masker are asynchronous. Non-simultaneous masking is further divided into backward masking and forward masking. Forward masking occurs when the masker precedes the signal; backward masking takes place when the masker follows the signal. Another type, simultaneous masking, arises when the signal and masker have similar frequencies, with the result that the sound is made inaudible (Moore 1997). Masking is greatest when the masker and the signal are at similar frequencies and decreases as the frequency separation between signal and masker increases. Simultaneous masking reduces frequency resolution significantly and is more severe than non-simultaneous masking (Rabiner and Schafer 1978).

If the information in the speech signal is divided and presented to the two ears in complementary patterns, the sensory cells of the basilar membrane are given some relief. This may help reduce the effect of increased masking and in this way improve speech reception for people with bilateral sensorineural hearing loss (Chopade and Futane 2015; Chaudhari and Pandey 1998). The modified wavelet packet tree algorithm (Kolte and Chaudhari 2010; Chopade and Futane 2015; Chaudhari and Pandey 1998; Chopade and Futane 2016; Kulkarni and Pandey 2008; 1991; Zwicker 1961) shows significant improvement in recognition scores for the processed wavelet-packet scheme, ranging from 3.33% to 22.23%. The objective of this investigation was to divide speech signals with the help of optimised wavelet packets into dichotic complementary bands, so as to reduce the problem of auditory masking with a minimal number of channels. A similar investigation using a biorthogonal wavelet family was reported in Chopade and Futane (2016). In this work, eight bands with quasi-octave bandwidths were developed (Zwicker 1961; Baskent 2006), and the even-odd dichotic presentation consisted of four alternate bands per ear.
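As a hedged sketch of how such an even-odd dichotic split could be realised (assuming the PyWavelets library; the 'db4' wavelet, the three-level decomposition, and the exact band grouping are illustrative choices and not necessarily those of the cited algorithm):

```python
import numpy as np
import pywt

def dichotic_split(x, wavelet="db4", level=3):
    """Split a signal into 2**level frequency-ordered wavelet-packet bands and
    rebuild two complementary signals from the even- and odd-numbered bands."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    bands = wp.get_level(level, order="freq")            # low-to-high frequency order

    def rebuild(selected):
        out = pywt.WaveletPacket(data=None, wavelet=wavelet, mode="symmetric", maxlevel=level)
        for node in selected:
            out[node.path] = node.data                    # keep only the chosen bands
        return out.reconstruct(update=False)[: len(x)]

    left_ear = rebuild(bands[0::2])                       # even-numbered bands
    right_ear = rebuild(bands[1::2])                      # odd-numbered bands
    return left_ear, right_ear

fs = 16000
speech = np.random.randn(fs)                              # stand-in for a speech signal
left, right = dichotic_split(speech)                      # complementary dichotic presentation
```

Summing `left` and `right` recovers (approximately) the original signal, which is the sense in which the two band sets are complementary.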