A Multi-Tier Framework for Understanding Spoken Language
Published in Listening to Speech, 2012
Steven Greenberg, William A. Ainsworth
This chapter summarizes a broad range of data consistent with the primacy of hearing in shaping the principal properties of spoken language. It highlights how information acts as a controlling factor in defining many properties of spoken language. Language can be approached from many different vantage points: neuroanatomy, psychology, phonetics, hearing, vision, physics, information theory, and formal logic. Certain languages, such as Spanish, lend themselves easily to Roman orthography; these tongues have a relatively transparent grapheme-to-phoneme relationship, in which words are pronounced largely as they are spelled and with some measure of consistency. Multitier theory turns the conventional phonetic framework on its head; it interprets the McGurk effect as the consequence of inherently ambiguous acoustic cues. Linking the information-processing component of speech communication with its biological foundations is likely to form the focus of spoken language research over the coming decades.
The influence of visual and auditory information on the perception of speech and non‐speech oral movements in patients with left hemisphere lesions
Published in Clinical Linguistics & Phonetics, 2009
Gabriele Schmid, Anke Thielmann, Wolfram Ziegler
Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands audiovisual processing both in speech and language treatment and in the diagnosis of oral-facial apraxia. The purpose of this study was to investigate differences in the audiovisual perception of speech as compared to non-speech oral gestures. Bimodal and unimodal speech and non-speech items were used; in addition, discordant stimuli were constructed. All items were presented for imitation. This study examined a group of healthy volunteers and a group of patients with lesions of the left hemisphere. Patients made substantially more errors than controls, but the factors influencing imitation accuracy were largely the same in both groups. Error analyses in both groups suggested different types of representations for the speech as compared to the non-speech domain, with speech processing weighted more towards the auditory modality and non-speech processing towards the visual modality. Additionally, this study was able to show that the McGurk effect is not limited to speech.
Auditory-visual speech perception in an adult with aphasia
Published in Brain Injury, 2004
Kathleen M. Youse, Kathleen M. Cienkowski, Carl A. Coelho
The evaluation of auditory-visual speech perception is not typically undertaken in the assessment of aphasia; however, treatment approaches utilise bimodal presentations. Research demonstrates that auditory and visual information are integrated for speech perception. The strongest evidence of this cross-modal integration is the McGurk effect. This indirect measure of integration shows that the presentation of conflicting tokens may change perception (e.g. auditory /bi/ + visual /gi/ = /di/). The purpose of this study was to investigate the ability of a person with mild aphasia to identify tokens presented in auditory-only, visual-only, and auditory-visual conditions. It was hypothesized that performance would be best in the bimodal condition and that the presence of the McGurk effect would demonstrate integration of speech information. The findings did not support these hypotheses. It is suspected that successful integration of auditory-visual speech information was limited by a perseverative response pattern. This case study suggests that the use of bisensory speech information may be impaired in adults with aphasia.
Associative memory properties of multiple cortical modules
Published in Network: Computation in Neural Systems, 1999
Alfonso Renart, Néstor Parga, Edmund T Rolls
The existence of recurrent collateral connections between pyramidal cells within a cortical area and, in addition, reciprocal connections between connected cortical areas, is well established. In this work we analyse the properties of a tri-modular architecture of this type in which two input modules have convergent connections to a third module (which in the brain might be the next module in cortical processing or a bi-modal area receiving connections from two different processing pathways). Memory retrieval is analysed in this system, which has Hebb-like synaptic modifiability in the connections and attractor states. Local activity features are stored in the intra-modular connections, while the associations between corresponding features in different modules present during training are stored in the inter-modular connections. The response of the network when tested with corresponding and contradictory stimuli to the two input pathways is studied in detail. The model is solved quantitatively using techniques of statistical physics. In one type of test, a sequence of stimuli is applied, with a delay between them. It is found that if the coupling between the modules is low, a regime exists in which they retain the capability to retrieve any of their stored features independently of the features being retrieved by the other modules. Although independent in this sense, the modules still influence each other in this regime through persistent modulatory currents, which are strong enough to initiate recall in the whole network when only a single module is stimulated, and to raise the mean firing rates of the neurons in the attractors if the features in the different modules are corresponding. Some of these mechanisms might be useful for the description of many phenomena observed in single neuron activity recorded during short-term memory tasks such as delayed match-to-sample. It is also shown that with contradictory stimulation of the two input modules the model accounts for many of the phenomena observed in the McGurk effect, in which contradictory auditory and visual inputs can lead to misperception.
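The tri-modular architecture described in this abstract can be illustrated with a minimal binary Hopfield-style sketch (a deliberate simplification, not the paper's mean-field model): two input modules A and V project convergently onto a module C, Hebbian intra-modular weights store local features, and inter-modular weights with coupling strength g store the associations between corresponding features. The module names, sizes, and the coupling value here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # units per module (illustrative size)
P = 3   # stored feature patterns per module

# Random +/-1 feature patterns; pattern p in each module "corresponds"
# to pattern p in the other two modules (co-presented during training).
patterns = {m: rng.choice([-1, 1], size=(P, N)) for m in "AVC"}

def sgn(h):
    # Sign function with a fixed tie-break so units never output 0.
    return np.where(h >= 0, 1, -1)

def hebb(pats):
    # Hebbian auto-associative weights with zero self-coupling.
    W = pats.T @ pats / N
    np.fill_diagonal(W, 0.0)
    return W

# Intra-modular recurrent weights store the local features of module C;
# inter-modular weights store associations between corresponding features.
W_C = hebb(patterns["C"])
g = 0.4  # inter-modular coupling strength (illustrative value)
W_AC = g * patterns["C"].T @ patterns["A"] / N
W_VC = g * patterns["C"].T @ patterns["V"] / N

def retrieve_C(a, v, steps=20):
    # Clamp the input modules to states a and v and let the convergent
    # module C settle under its recurrent plus inter-modular fields.
    c = sgn(W_AC @ a + W_VC @ v)
    for _ in range(steps):
        c = sgn(W_C @ c + W_AC @ a + W_VC @ v)
    return c

def overlap(c, p):
    # Normalized overlap of state c with stored pattern p of module C.
    return float(c @ patterns["C"][p]) / N

# Corresponding stimulation: both input modules cue feature 0.
c_match = retrieve_C(patterns["A"][0], patterns["V"][0])
# Contradictory stimulation (McGurk-like): auditory 0, visual 1.
c_conflict = retrieve_C(patterns["A"][0], patterns["V"][1])

print("corresponding overlaps:", [round(overlap(c_match, p), 2) for p in range(P)])
print("contradictory overlaps:", [round(overlap(c_conflict, p), 2) for p in range(P)])
```

With corresponding inputs the convergent module settles into the jointly cued attractor; with contradictory inputs it receives two competing fields and may settle into either cued feature or a mixture state, a crude analogue of the misperceptions discussed above.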