Hearing protection and communication
Published in Nicholas Green, Steven Gaydos, Ewan Hutchison, Edward Nicol, Handbook of Aviation and Space Medicine, 2019
Objective tests include the Articulation Index (AI) and the Speech Transmission Index (STI). These offer predictive methods for assessing probable speech intelligibility from direct measurement of noise and speech signals, and they assess the relative contributions that different parts of the speech frequency spectrum make to the overall intelligibility of the speech signal.
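To make the second index concrete, the sketch below computes a simplified STI from per-band signal-to-noise ratios under the assumption of stationary (non-fluctuating) noise, where each band's modulation transfer index reduces to m = 1/(1 + 10^(-SNR/10)). The band count, SNR values and equal weighting are illustrative assumptions, not the standardised IEC procedure.

```python
import numpy as np

def simplified_sti(snr_db, weights=None):
    """Simplified STI sketch for stationary noise.

    For stationary noise the modulation transfer index in each band
    is m = 1 / (1 + 10**(-SNR/10)). The apparent SNR recovered from
    m is clipped to +/-15 dB, rescaled to a 0-1 transmission index,
    and averaged across bands.
    """
    snr_db = np.asarray(snr_db, dtype=float)
    m = 1.0 / (1.0 + 10.0 ** (-snr_db / 10.0))
    apparent_snr = 10.0 * np.log10(m / (1.0 - m))  # equals snr_db for stationary noise
    ti = (np.clip(apparent_snr, -15.0, 15.0) + 15.0) / 30.0
    if weights is None:
        weights = np.full_like(ti, 1.0 / ti.size)  # equal band weights (assumption)
    return float(np.sum(weights * ti))

# Hypothetical SNRs (dB) in seven octave bands, 125 Hz to 8 kHz:
print(simplified_sti([3, 6, 9, 12, 9, 6, 3]))
```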
Hearing Aids
Published in John C Watkinson, Raymond W Clarke, Christopher P Aldren, Doris-Eva Bamiou, Richard M Irving, Haytham Kubba, Shakeel R Saeed, Paediatrics, The Ear, Skull Base, 2018
Hearing aid fittings can be evaluated by speech testing (for those over 3 years of age), paired-comparison preference testing (for those over 6 years of age), and subjective reporting by the child, the parents or the teachers, whether informally or using systematic methods such as Parents’ Evaluation of Aural/Oral Performance of Children (PEACH) and Meaningful Auditory Integration Scale (MAIS). The audibility of speech can be estimated by calculating the articulation index (also known as the speech intelligibility index (SII)) or assessed by measuring the presence, latency and perhaps morphology of the cortical responses elicited by speech sounds. The availability of speech to the child can be indirectly assessed by measuring the child’s language development.
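The audibility calculation behind the articulation index/SII can be sketched in a few lines: in each band, speech contributes in proportion to how far it exceeds the higher of the hearing threshold and the noise floor, weighted by that band's importance for intelligibility. The four-band layout, the example levels and the importance weights below are hypothetical simplifications, not the ANSI S3.5 procedure.

```python
import numpy as np

def simplified_sii(speech_db, threshold_db, noise_db, importance):
    """Band-importance-weighted audibility (SII-style sketch).

    In each band the effective floor is the higher of the hearing
    threshold and the noise level; speech is treated as fully usable
    up to 30 dB above that floor, so per-band audibility is the
    sensation level clipped to 0-30 dB and divided by 30.
    """
    speech = np.asarray(speech_db, dtype=float)
    floor = np.maximum(threshold_db, noise_db)
    audibility = np.clip(speech - floor, 0.0, 30.0) / 30.0
    return float(np.sum(np.asarray(importance) * audibility))

# Hypothetical four-band example (dB SPL); importance weights sum to 1:
speech     = [55, 50, 45, 40]
threshold  = [20, 25, 40, 55]
noise      = [30, 25, 20, 15]
importance = [0.2, 0.3, 0.3, 0.2]
print(simplified_sii(speech, threshold, noise, importance))
```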
Speech and its perception
Published in Stanley A. Gelfand, Hearing, 2017
Speech intelligibility under given conditions can be estimated or predicted using a number of acoustical methods, such as the articulation index and the speech transmission index. The articulation index (AI) was introduced by French and Steinberg (1947). The AI estimates speech intelligibility by considering how much of the speech signal is audible above the listener's threshold as well as the signal-to-noise ratio. In its original formulation, the basic concept of the AI involves the use of 20 contiguous frequency bands, each of which contributes the same proportion (0.05, or 5%) to the overall intelligibility of the message. The contributions of these bands are then summed into a single number from 0 to 1.0, which is the articulation index.
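A minimal Python sketch of this 20-band scheme follows. The linear mapping from per-band SNR to band contribution over a 30 dB range is a common textbook simplification of the full procedure, and the SNR values in the usage lines are hypothetical.

```python
import numpy as np

def articulation_index(snr_db):
    """Original 20-band AI sketch (after French & Steinberg, 1947).

    Each of 20 contiguous bands contributes equally (weight 0.05).
    A band's contribution grows linearly with its effective SNR,
    clipped to a 0-30 dB range spanning inaudible to fully usable
    speech; the 20 contributions are summed to an index in 0-1.0.
    """
    snr_db = np.asarray(snr_db, dtype=float)
    assert snr_db.size == 20, "original formulation uses 20 bands"
    band_contribution = 0.05 * np.clip(snr_db, 0.0, 30.0) / 30.0
    return float(band_contribution.sum())

# Hypothetical per-band SNRs: AI reaches 1.0 when every band is at +30 dB.
print(articulation_index([30.0] * 20))  # -> 1.0
print(articulation_index([15.0] * 20))  # -> 0.5
```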
Heavy or semi-heavy tail, that is the question
Published in Journal of Applied Statistics, 2021
Jamil Ownuk, Hossein Baghishani, Ahmad Nezakati
The data in this example comprise 50 noise samples recorded from forklifts. The subjective evaluation index (annoyance) was divided into ten grades: the annoyance range was split into five parts from low to high, with two grades in each part, as shown in Table 1 of Zhang et al. [37]. Following Zhang et al. [37], we selected nine objective parameters as the objective evaluation indexes: linear sound pressure level (LSPL), A-weighted sound pressure level (ASPL), loudness, sharpness, roughness, fluctuation, tonality, articulation index (AI), and impulsiveness. The details of the data are given in Table 2 of Zhang et al. [37]. The data include one (or possibly two) extreme observations (Figure 2 in Zhang et al. [37], not reported here), which suggests that Gaussian errors may not be an appropriate assumption for the regression model. For this reason, we considered the proposed families for the error distribution.
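To make the modelling choice concrete, here is a minimal Python sketch of a linear regression with heavy-tailed (Student-t) errors fitted by maximum likelihood. The Student-t law stands in for the paper's proposed families, and the data, including the injected extreme observation, are simulated for illustration only.

```python
import numpy as np
from scipy import optimize, stats

def fit_t_regression(x, y, df=3.0):
    """Linear regression with Student-t errors (fixed df), fit by ML.

    A heavy-tailed error law downweights extreme observations that
    would otherwise dominate an ordinary least-squares fit.
    """
    X = np.column_stack([np.ones(len(y)), x])  # add intercept column

    def negloglik(params):
        beta, log_sigma = params[:-1], params[-1]
        resid = y - X @ beta
        return -np.sum(stats.t.logpdf(resid, df=df, scale=np.exp(log_sigma)))

    start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [0.0]])
    res = optimize.minimize(negloglik, start, method="Nelder-Mead")
    return res.x[:-1], np.exp(res.x[-1])

# Hypothetical data: one objective index predicting annoyance, with an outlier.
rng = np.random.default_rng(0)
x = rng.uniform(60, 90, 50)                  # e.g. ASPL in dB(A)
y = 0.1 * x - 2 + rng.normal(0, 0.5, 50)
y[0] += 8.0                                  # one extreme observation
beta, sigma = fit_t_regression(x, y)
print(beta, sigma)
```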
Motor speech and non-motor language endophenotypes of Parkinson’s disease
Published in Expert Review of Neurotherapeutics, 2019
Michelle Magee, David Copland, Adam P. Vogel
Our review noted five studies reporting speech in PD prior to symptomatic treatment (Table 2). Changes in acoustic measures of phonation (increased jitter, shimmer and NHR; decreased HNR [54,70]), articulation (decreased DDK rate, relative intensity range variation (RIRV), first autocorrelation coefficient of the F2 contour (RFPC), spectral distance changes (SDCV), triangular vowel articulation space (tVSA), vowel articulation ratio of the F2 frequencies of /i/ and /u/ (F2i/F2u) and vowel articulation index (VAI); increased robust relative intensity slope (RRIS) and F2 frequency of the corner vowel /u/ (F2u) [54,88]) and prosody (increased F0 and number of pauses; decreased F0 SD and intensity SD [54,70,72,89]) are described in early PD.
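The vowel-based metrics in this list are simple functions of the first two formant frequencies of the corner vowels /i/, /a/ and /u/. A minimal sketch follows, using the commonly cited definitions of VAI (the inverse of the formant centralization ratio) and tVSA (the area of the corner-vowel triangle in the F1-F2 plane); the formant values in the example call are hypothetical.

```python
def vowel_metrics(f1i, f2i, f1a, f2a, f1u, f2u):
    """Vowel articulation metrics from corner-vowel formants (Hz).

    VAI: formants that rise with articulatory excursion (F2 of /i/,
    F1 of /a/) divided by those that rise with centralization.
    tVSA: area of the /i/-/a/-/u/ triangle in the F1-F2 plane
    (shoelace formula), in Hz^2.
    """
    vai = (f2i + f1a) / (f1i + f1u + f2u + f2a)
    tvsa = 0.5 * abs(f1i * (f2a - f2u) + f1a * (f2u - f2i) + f1u * (f2i - f2a))
    return {"F2i/F2u": f2i / f2u, "VAI": vai, "tVSA": tvsa}

# Hypothetical formant values for a healthy speaker:
print(vowel_metrics(f1i=300, f2i=2300, f1a=800, f2a=1300, f1u=350, f2u=800))
```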
Functionality of hearing aids: state-of-the-art and future model-based solutions
Published in International Journal of Audiology, 2018
Birger Kollmeier, Jürgen Kiessling
All these models utilise a comparatively simple binaural circuit, the equalisation and cancellation (EC) mechanism according to Durlach (1963), as the central binaural noise reduction mechanism: in the equalisation (E) stage, the noise (or, in some implementations, the complete mixture of signal and noise) is matched across the two binaural channels by an appropriate interaural amplification (factor alpha) and an interaural time shift (parameter tau). In the subsequent cancellation (C) step, the two channels are subtracted from each other, yielding a more or less complete elimination of the background noise (especially if the same noise is present at both ears) and an amplification of the target signal. The improvement in signal-to-noise ratio and, equivalently, in SRT depends on the binaural configuration. It is limited by statistical fluctuations of the parameters alpha and tau; these statistical inaccuracies effectively form a kind of residual noise at the central/neural level (vom Hövel, 1984). The combination of the EC mechanism with a speech intelligibility prediction method (such as the articulation index or the speech intelligibility index) can then be used to predict SRT values for different binaural conditions in normal listeners (vom Hövel, 1984; Zurek, 1990; Beutelmann & Brand, 2006).
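A minimal time-domain sketch of the EC idea follows, assuming stationary noise and an integer-sample interaural delay: alpha is estimated from the RMS ratio of noise-only reference segments and tau from their cross-correlation, then one equalised channel is subtracted from the other. Real EC models operate per frequency band and include internal jitter on alpha and tau; none of that is reproduced here, and the signals are synthetic.

```python
import numpy as np

def ec_cancel(left, right, noise_left, noise_right, max_lag=32):
    """Equalisation-cancellation sketch (after Durlach, 1963).

    E step: estimate the interaural gain (alpha) and integer-sample
    delay (tau) that align the noise in the two channels, from
    noise-only reference segments.
    C step: subtract the equalised right channel from the left,
    cancelling noise common to both ears.
    """
    # Interaural amplification from the RMS ratio of the noise references.
    alpha = np.sqrt(np.mean(noise_left**2) / np.mean(noise_right**2))
    # Interaural delay from the peak of the cross-correlation.
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.sum(noise_left[max_lag:-max_lag] *
                    np.roll(noise_right, lag)[max_lag:-max_lag]) for lag in lags]
    tau = lags[int(np.argmax(xcorr))]
    # Equalise and cancel (np.roll is a crude stand-in for a true delay).
    return left - alpha * np.roll(right, tau)

# Hypothetical demo: identical (diotic) noise, target only at the left ear.
rng = np.random.default_rng(1)
noise = rng.normal(0, 1, 8000)
target = np.sin(2 * np.pi * 500 * np.arange(8000) / 16000)
out = ec_cancel(target + noise, noise.copy(), noise, noise)
print(np.allclose(out, target, atol=1e-6))  # noise fully cancelled -> True
```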