Total Exposure Hearing Health Preservation
Published in Kirk A. Phillips, Dirk P. Yamamoto, LeeAnn Racz, Total Exposure Health, 2020
Two important non-noise exposures affecting hearing health are ototoxic chemicals and pharmaceuticals, which can be considered different ends of the same spectrum. Any chemical agent that affects the nervous system may have some effect on the auditory nerve. Substantial toxicological and epidemiological evidence shows that several solvents and heavy metals are ototoxic. Pharmaceutical preparations such as aminoglycoside antibiotics, loop diuretics, certain analgesics, antipyretics, and antineoplastic agents also carry warnings that they may induce hearing loss (NIOSH and OSHA 2018). A worker co-exposed to noise and ototoxic chemicals should be considered "hypersensitive" to noise-induced hearing loss (NIHL). Some agencies, such as the US Army, advise that airborne exposure to ototoxins above 50% of the occupational exposure limit requires workers to receive annual audiograms to monitor possible ototoxic health effects (US Army 2015). Further, the combination of impulse noise and ototoxins may be more injurious than either alone or than their assumed additive effects (Pons et al. 2017).
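The 50%-of-OEL monitoring trigger described above amounts to a simple ratio check. A minimal sketch, assuming a generic occupational exposure limit in the same units as the measured concentration (the example values are placeholders, not regulatory data):

```python
def requires_audiogram(airborne_conc: float, oel: float,
                       action_fraction: float = 0.5) -> bool:
    """Flag annual audiometric monitoring when airborne exposure to an
    ototoxin exceeds the given fraction of its occupational exposure limit."""
    if oel <= 0:
        raise ValueError("OEL must be positive")
    return airborne_conc / oel > action_fraction

# Example: a hypothetical solvent with an OEL of 50 ppm, measured at 30 ppm.
print(requires_audiogram(30.0, 50.0))  # 60% of the OEL -> True
print(requires_audiogram(10.0, 50.0))  # 20% of the OEL -> False
```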
Flexible Electronic Technologies for Implantable Applications
Published in Muhammad Mustafa Hussain, Nazek El-Atab, Handbook of Flexible and Stretchable Electronics, 2019
Loss of any sensory organ leads to significant challenges in a person's life. Hearing loss is one such example; it can be conductive, sensorineural, or a combination of both. In normal hearing, sound waves vibrate the eardrum, the ossicles conduct these vibrations to the inner ear, and the auditory nerve translates the resulting signals to the brain. Conductive hearing loss occurs when sound cannot pass from the outer ear to the inner ear; the diminishing factors can include a stiffened eardrum that has lost its elasticity, or conducting bones that have become rigid and lost their mobility. Sensorineural deafness, in contrast, is caused by damage to the inner-ear sensory hair cells or to the auditory nerve, and is the typical form of age-related hearing loss (presbycusis). For such cases, cochlear implants are used. Preliminary work on the cochlear implant began in the 1960s; it was the first neural interface designed to restore hearing to the clinically deaf, representing the first successful integration of stimulation electrodes within the peripheral nervous system (PNS) (Macherey and Carlyon 2014; Jalili et al. 2017). Cochlear implants essentially bypass the hair cells: electrodes inserted into the inner ear directly stimulate the auditory nerve in response to the incoming audio signal from a microphone placed near the external ear.
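The bypass described above is commonly realized by splitting the microphone signal into frequency bands and letting each band's envelope drive one electrode. A minimal numpy-only sketch of that idea, assuming FFT-mask band splitting with illustrative band edges and smoothing (not the parameters of any clinical processor):

```python
import numpy as np

def band_envelopes(x, fs, band_edges, smooth_ms=8.0):
    """Split x into frequency bands via FFT masking, then estimate each
    band's envelope by full-wave rectification and a moving average.
    Each envelope would modulate the stimulation level on one electrode."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    win = max(1, int(fs * smooth_ms / 1000.0))
    kernel = np.ones(win) / win
    envs = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(X * mask, n=len(x))              # band-limited signal
        envs.append(np.convolve(np.abs(band), kernel, mode="same"))
    return np.array(envs)

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t)          # a pure 500 Hz tone
edges = [100, 800, 1500, 3000, 6000]     # illustrative electrode bands
envs = band_envelopes(x, fs, edges)
# Energy concentrates in the first band (100-800 Hz), so that electrode fires.
print(int(np.argmax(envs.mean(axis=1))))  # -> 0
```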
The Environment
Published in Céline McKeown, Office Ergonomics and Human Factors, 2018
Acoustic waves can be described as fluctuations in pressure, or oscillations, in an elastic medium. The oscillations produce an auditory experience, which is sound. This is achieved because the ear converts the acoustic waves into nerve impulses, which travel to the brain along the auditory nerve. The brain processes this information and imposes some sense on it, resulting in the perception of sound and the identification of auditory patterns. How loud a sound is perceived to be is determined by its frequency and its sound pressure level (SPL). Frequency refers to the number of complete cycles that occur in one second; it is expressed in Hertz (Hz) and gives the sensation of pitch. The amplitude of the sound wave corresponds to the intensity of the sound and provides the sensation of loudness. The human ear is normally sensitive to frequencies between 20 and 20,000 Hz, a range referred to as the audible spectrum. We hear best between 1000 and 4000 Hz, the frequency band in which speech is transmitted; auditory thresholds (the quietest level at which we can actually hear something) are lowest in this band. Noises at lower frequencies have to be much louder to be heard. Noise is measured in decibels (dB). An A-weighting, written as dB(A), is used to measure average noise levels; a C-weighting, written as dB(C), measures peak, impact, or explosive noise. Table 9.1 indicates typical noise levels encountered in a number of situations.
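The decibel quantities above can be made concrete: SPL in dB is 20·log10(p/p0) with reference pressure p0 = 20 µPa, and the A-weighting curve (defined in IEC 61672) attenuates low and very high frequencies roughly as the ear does. A short sketch:

```python
import math

P0 = 20e-6  # reference pressure, 20 micropascals

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB relative to 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P0)

def a_weighting_db(f: float) -> float:
    """IEC 61672 A-weighting correction (dB) at frequency f in Hz."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0

print(round(spl_db(20e-6), 1))          # 0.0 dB at the reference pressure
print(round(a_weighting_db(1000), 1))   # ~0.0 dB: the curve is anchored at 1 kHz
print(round(a_weighting_db(100), 1))    # about -19.1 dB: low frequencies attenuated
```

This is why a low-frequency hum must have a far higher SPL than a 1-4 kHz tone to register the same dB(A) level.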
Recent advances in neuromorphic transistors for artificial perception applications
Published in Science and Technology of Advanced Materials, 2023
The auditory system is one of the most important perceptual systems: it detects, processes, and stores acoustic signals, making human information exchange more efficient and direct. The auditory organ is the functional unit that senses sound waves in the human body. With its help we obtain many kinds of sound information, allowing us to communicate with each other, avoid enemies, capture prey, and so on, which is of great significance to our daily activities [113]. Human hearing is a complex biological process. The auditory receptors lie in the organ of Corti on the basilar membrane of the cochlea [114]. When an external object vibrates and produces sound, the periodic changes in air pressure cause the eardrum to vibrate with the corresponding frequency and amplitude. These mechanical vibrations are transmitted through the auditory ossicles to the cochlear hair cells, where they are converted into electrical signals that trigger auditory nerve impulses. Finally, these signals enter the auditory area of the cerebral cortex, producing the sense of hearing. Furthermore, accurate sound localization plays an important role in the information exchange and intelligent activities of animals. In the human nervous system, sound localization is achieved by detecting the time differences between the sound signals received by the two ears, and vibration is one of the most powerful communication modes for information sharing and azimuth recognition. Patients with auditory system failure therefore face serious obstacles in auditory perception and verbal communication. To provide alternative treatment options, researchers across multiple disciplines have been trying to develop low-cost, low-power, convenient, and stable artificial auditory systems to help people with hearing impairments restore or improve their auditory perception [115,116].
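The interaural time-difference cue mentioned above can be written down directly: for a far-field source at azimuth θ and ear spacing d, a common approximation is ITD ≈ d·sin(θ)/c. A sketch of the model and its inverse (the 0.21 m head width and 343 m/s sound speed are round illustrative values):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air, approximate
EAR_SPACING = 0.21       # m, rough adult head width

def itd_seconds(azimuth_deg: float) -> float:
    """Interaural time difference for a far-field source (simple sine model)."""
    return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

def azimuth_from_itd(itd: float) -> float:
    """Invert the model: estimate source azimuth (degrees) from a measured ITD."""
    return math.degrees(math.asin(itd * SPEED_OF_SOUND / EAR_SPACING))

print(round(itd_seconds(90) * 1e6))              # ~612 us for a source at the side
print(round(azimuth_from_itd(itd_seconds(30))))  # 30: the inverse recovers azimuth
```

Sub-millisecond differences like these are what an artificial auditory front end must resolve to give a robot sound-localization ability.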
On the other hand, a flexible and stable artificial auditory system would give the next generation of intelligent robots new vitality: with it, robots could locate and track sounds, and it would endow them with decision-making abilities based on hearing, speaking, and understanding.
Modeling and simulation of cochlear perimodiolar electrode based on composite spring-mass model
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2022
Jianjun Li, Yue Wu, Jianye Zhuo, Zuo Wang
The cochlear implant is the preferred treatment for sensorineural hearing loss. It converts acoustic signals into electrical signals that directly stimulate the auditory nerve, improving the patient's auditory function (Lenarz 2017). The perimodiolar electrode is a novel cochlear implant electrode that is pre-formed into a helical shape; before surgery, a guide wire is inserted into its body to hold it straight. During surgery the electrode is inserted into the human cochlea, and as the guide wire is withdrawn it recovers its preset shape (Tykocinski et al. 2001). Although cochlear implants have been used successfully to restore the hearing of deaf people, electrode insertion risks damaging the fragile structures of the cochlea. The electrode inserted into the cochlea is prone to yielding and damage; moreover, if the electrode is forced in against resistance during the insertion process, it can damage the basilar membrane and destroy the patient's residual hearing (Wardrop et al. 2005; Bas et al. 2016; Ramos-Macias et al. 2017; Ketterer et al. 2018). Cochlear implantation therefore requires highly qualified doctors who can quickly and accurately judge the insertion of the electrode array and skillfully operate surgical instruments during the surgery. This demands extensive practice, hands-on experience, and sound pre-operative evaluation (Ma et al. 2017). In recent years, virtual reality technology has been widely used in the medical field, and various virtual surgery systems have been developed (Barber et al. 2018; Macmillan et al. 2018; Shono et al. 2018) that play an important role in the training of doctors and in pre-operative evaluation.
Combining cochlear perimodiolar electrode implantation surgery with virtual reality technology, so that electrode insertion can be rehearsed in a virtual environment, can help doctors identify the optimal insertion path, preview the surgical process, and move away from the traditional model of designing surgical plans from subjective experience alone. This considerably improves the safety of cochlear implantation and reduces complications (Copson et al. 2017).
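The spring-mass idea behind such electrode simulations can be sketched very simply. In the toy model below, each node of the electrode chain is pulled toward its preset (pre-curled) position by a damped spring, so releasing the straightened chain makes it relax into the helical rest shape; this is an illustrative simplification (real models such as the one in the title also couple neighbouring nodes), with stiffness, damping, and spiral geometry chosen arbitrarily:

```python
import math

def relax_electrode(start, target, k=40.0, damping=6.0, dt=0.01, steps=2000):
    """Damped spring relaxation in 2D: each node is pulled toward its preset
    position by a spring of stiffness k (semi-implicit Euler integration)."""
    pos = [list(p) for p in start]
    vel = [[0.0, 0.0] for _ in start]
    for _ in range(steps):
        for p, v, t in zip(pos, vel, target):
            for d in range(2):
                force = k * (t[d] - p[d]) - damping * v[d]
                v[d] += force * dt
                p[d] += v[d] * dt
    return pos

n = 12
# Straightened state (guide wire in): nodes along a line.
start = [(0.1 * i, 0.0) for i in range(n)]
# Preset perimodiolar shape: nodes on a spiral arc (illustrative geometry).
target = [(0.5 * math.cos(0.5 * i), 0.5 * math.sin(0.5 * i)) for i in range(n)]
final = relax_electrode(start, target)
err = max(math.hypot(p[0] - t[0], p[1] - t[1]) for p, t in zip(final, target))
print(err < 1e-3)  # True: the chain has recovered its preset curl
```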
Implementation of optimised wavelet packet algorithms for audibility enhancement
Published in Australian Journal of Electrical and Electronics Engineering, 2020
Jayant J. Chopade, N. P. Futane
Physiologically, the cochlea, a part of the inner ear, is responsible for filtering frequencies, giving rise to frequency selectivity. A complex signal is decomposed into its frequency components, each of which produces a peak in the vibration pattern at a particular place along the basilar membrane and its hair cells (cilia). These components are coded independently at the auditory nerve, which transmits the signals to the brain. Independent coding takes place only if the frequency components are significantly different; otherwise the signals are coded at the same place and treated as a single sound (Moore 1997). Auditory masking is classified by the timing of the masker. Non-simultaneous masking takes place when the signal and masker are asynchronous; it is divided into forward masking, where the masker precedes the signal, and backward masking, where the masker follows the signal. Another type, simultaneous masking, arises when signals of similar frequencies occur together, rendering a sound inaudible (Moore 1997). Masking is greatest when the masker and the signal are at similar frequencies and decreases as the frequency separation between them increases. Simultaneous masking reduces frequency resolution significantly and is more severe than non-simultaneous masking (Rabiner and Schafer 1978). If the information in a speech signal is divided between the two ears in complementary patterns, the sensory cells of the basilar membrane get some relief; this may reduce the impact of increased masking and thereby improve speech reception for people with bilateral sensorineural hearing loss (Chopade and Futane 2015; Chaudhari and Pandey 1998).
The modified wavelet packet tree algorithm (Kolte and Chaudhari 2010; Chopade and Futane 2015; Chaudhari and Pandey 1998; Chopade and Futane 2016; Kulkarni and Pandey 2008; 1991; Zwicker 1961) shows significant improvement in recognition scores for the wavelet-packet processing scheme, ranging from 3.33% to 22.23%. The objective of this investigation was to divide speech signals with optimised wavelet packets into dichotic complementary bands, thereby reducing auditory masking with a minimal number of channels. A similar investigation using a biorthogonal wavelet family was reported in (Chopade and Futane 2016). In this work, eight bands with quasi-octave bandwidths were developed (Zwicker 1961; Baskent 2006), and the even-odd dichotic presentation consisted of four alternate bands for each ear.
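The even-odd dichotic split can be illustrated without wavelets: partition the spectrum into eight contiguous bands and route alternate bands to the left and right ears, so the two ear signals are complementary. An FFT-mask sketch (the band edges are illustrative, not the paper's quasi-octave widths, and the actual scheme uses optimised wavelet packets rather than FFT masking):

```python
import numpy as np

def dichotic_split(x, fs, edges):
    """Partition the spectrum of x into bands [edges[i], edges[i+1]) and
    route even-indexed bands to the left ear, odd-indexed to the right."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    left = np.zeros_like(X)
    right = np.zeros_like(X)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        mask = (freqs >= lo) & (freqs < hi)
        (left if i % 2 == 0 else right)[mask] = X[mask]
    return np.fft.irfft(left, n=len(x)), np.fft.irfft(right, n=len(x))

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 2000 * t)
# Eight contiguous bands covering the spectrum up to Nyquist (illustrative edges).
edges = [0, 250, 500, 1000, 1500, 2500, 4000, 6000, 8001]
left_ear, right_ear = dichotic_split(x, fs, edges)
# The two ear signals are complementary: together they restore the input.
print(np.allclose(left_ear + right_ear, x))  # True
```

Because the bands are disjoint and cover the whole spectrum, summing the two ear signals reconstructs the original, which is exactly the complementarity the dichotic presentation relies on.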