Engaging Young Children in Speech and Language Therapy via Videoconferencing
Published in Christopher M. Hayre, Dave J. Muller, Marcia J. Scherer, Everyday Technologies in Healthcare, 2019
Stuart Ekberg, Sandra Houen, Belinda Fisher, Maryanne Theobald, Susan Danby
An estimated 9.92% of children meet contemporary diagnostic criteria for a language disorder (Norbury et al., 2016). Without intervention, children who have persistent language difficulties are more likely to encounter challenges in education and employment in later life (Conti-Ramsden et al., 2018). These problems can continue into adulthood and encompass reading, writing, focusing, thinking, calculating, communicating, mobility, self-care, education, employment and interpersonal relationships with acquaintances, family and authority figures (McCormack et al., 2009, 2011). Language difficulties also affect children’s peer relationships: children who cannot make themselves understood, or who cannot understand what others are saying to them, may struggle with social interaction. These effects can increase the likelihood of mental illness and reduce quality of life (van den Bedem et al., 2018; Eadie et al., 2018). Childhood language difficulties are also associated with increased healthcare costs in childhood (Cronin et al., 2017). Identifying and addressing speech and language difficulties in early childhood is one proactive way to address the diverse implications of these difficulties. Everyday technologies, such as videoconferencing, afford opportunities to promote access to specialist services for the treatment of these difficulties. Current telehealth practice uses four types of technology: synchronous (i.e. ‘real time’) interaction, asynchronous (sometimes referred to as ‘store and forward’) interaction, remote patient monitoring and mobile health. This chapter focuses on synchronous audio-visual communication technologies, which are best suited for the clinical treatment of young children (Mashima and Doarn, 2008; Wilson et al., 2002; Dunkley et al., 2010).
Predicting developmental language disorders using artificial intelligence and a speech data analysis tool
Published in Human–Computer Interaction, 2023
Eleonora Aida Beccaluva, Fabio Catania, Fabrizio Arosio, Franca Garzotto
In children with atypical development, language acquisition is challenging, and the variation between subjects is remarkably broader when compared to TD children (Tager-Flusberg, 1999). According to the literature, almost 7% of young children experience a language disorder in their life (Black et al., 2015; Evans & Brown, 2016). Among them, Developmental Language Disorder (DLD) affects about 1 out of 15 (Clegg et al., 2005). The term DLD is recent, first coined in 2017. This disorder was previously referred to as Specific Language Impairment (SLI) (Bishop, 2014), and the two terms often overlap, although they are not synonymous. Children with DLD have persistent and significant language impairments (Leonard, 2014) and, although every child is unique, some specific deficits are more common in this population. Problems in language development might persist through school age (or beyond) and occur in the absence of autism spectrum disorder, intellectual disability, or known biomedical conditions (Clegg et al., 2005). DLD has a significant impact on everyday social interactions and educational progress (American Psychiatric Association, 2013). Children with DLD have trouble using grammar and speech sounds and have a reduced vocabulary. They face difficulties with semantics, pragmatics, verbal memory, and phonology (Spencer, 2013). For many children with DLD, understanding language (receptive language) is also challenging, even if this may not be evident until a clinical assessment (Rapin, 1996).
There are some developmental benchmarks for children with DLD that can act as red flags: at one year of age they do not properly react to speech sounds or produce them; at two years they have a limited vocabulary and make minimal attempts to communicate by gestures; at three years they have limited use of language and speech is often unintelligible; between four and five years they produce short sentences with a small number of words and they experience difficulty answering questions and telling stories, both with adults and in peer interactions (Visser-Bochane et al., 2017).
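These age-based red flags can be sketched as a simple lookup table, for example as part of a screening aid. The structure and function names below are illustrative assumptions, not taken from any clinical tool described in the source:

```python
# Illustrative sketch: the age-based red flags summarized from
# Visser-Bochane et al. (2017), encoded as a lookup table.
# Structure and names are hypothetical.

RED_FLAGS = {
    1: "does not properly react to speech sounds or produce them",
    2: "limited vocabulary; minimal attempts to communicate by gestures",
    3: "limited use of language; speech often unintelligible",
    4: ("short sentences with few words; difficulty answering questions "
        "and telling stories with adults and peers"),
}

def red_flag_for(age_years):
    """Return the red-flag description for a child's age, if any.

    Ages four and five share the same benchmark; ages outside the
    table return None (no benchmark listed in the source).
    """
    if age_years == 5:
        age_years = 4
    return RED_FLAGS.get(age_years)
```

A caller checking a five-year-old, `red_flag_for(5)`, receives the same benchmark as for a four-year-old, reflecting the shared four-to-five-year range in the source.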
Identifying features of apps to support using evidence-based language intervention with children
Published in Assistive Technology, 2020
Although no single mechanism can be attributed to language disorders, it is postulated that children with language disorders have difficulty with the way they process auditory and visual information and represent the information as a cognitive process (Gillam, Hoffman, Marler, & Wynn-Dancy, 2002; Vugs, Knoors, Cuperus, Hendriks, & Verhoeven, 2016). To facilitate the processing of visual and auditory information, multimedia resources such as apps are being incorporated into teaching and learning. Recent advances have shown that multimedia learning is more effective than traditional, uni-modal learning (Mayer, 2017). To account for the cognitive load that affects multimedia learning, Mayer and Moreno (2003) have proposed a cognitive theory of multimedia learning. This theory is based on three underlying assumptions: (a) verbal and visual information are processed separately, (b) there is a limited amount of processing capacity available in the verbal and visual channels, and (c) learning requires active cognitive processing in the verbal and visual channels. To reduce the extraneous cognitive load, and the demands on working memory, Mayer (2003) and Mayer and Moreno (2003) have identified a number of principles for multimedia learning.
Multimedia principle: Students learn better from words and pictures than from words alone.
Spatial contiguity principle: Students learn better when corresponding words and pictures are presented near to one another.
Temporal contiguity principle: Students learn better when corresponding words and pictures are presented simultaneously rather than successively.
Coherence principle: Students learn better when extraneous words, pictures, and sounds are excluded.
Modality principle: Students learn better from animation and narration than from animation and on-screen text.
Redundancy principle: Students learn better from animation and narration than from animation, narration, and on-screen text.
Signaling principle: Better transfer of knowledge occurs when narrations are signaled.
Pretraining principle: Better transfer of knowledge occurs when students already know the names and characteristics of essential components.
Pacing principle: Better transfer occurs when the pace of presentation is controlled by the learner, rather than by the program.
Individual differences principle: Design effects are stronger for low-knowledge learners than for high-knowledge learners.
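One way these principles might be operationalized when reviewing a language-intervention app is as a design checklist. The sketch below is a hypothetical illustration, assuming a simple feature-tag representation of an app; the item names and helper function are not from the source:

```python
# Illustrative sketch: Mayer and Moreno's multimedia-learning
# principles as a hypothetical design-review checklist for an app.
# Checklist keys and the review() helper are assumptions.

PRINCIPLES = {
    "multimedia": "Words are paired with pictures, not presented alone.",
    "spatial_contiguity": "Corresponding words and pictures appear near each other.",
    "temporal_contiguity": "Corresponding words and pictures appear simultaneously.",
    "coherence": "Extraneous words, pictures, and sounds are excluded.",
    "modality": "Animation is paired with narration rather than on-screen text.",
    "redundancy": "Narration is not duplicated as on-screen text.",
    "signaling": "The narration signals the essential material.",
    "pretraining": "Names and characteristics of key components are taught first.",
    "pacing": "The learner, not the program, controls the presentation pace.",
    "individual_differences": "Design supports low-knowledge learners.",
}

def review(app_features):
    """Return the principles an app's feature set does not yet satisfy."""
    return [name for name in PRINCIPLES if name not in app_features]
```

For example, an app tagged only with `{"multimedia", "pacing"}` would come back with the remaining eight principles flagged for the design team to consider.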