Retention and Decay
Published in Robert R. Hoffman, Paul Ward, Paul J. Feltovich, Lia DiBello, Stephen M. Fiore, Dee H. Andrews, Accelerated Expertise, 2013
Robert R. Hoffman, Paul Ward, Paul J. Feltovich, Lia DiBello, Stephen M. Fiore, Dee H. Andrews
Different retention measures can yield different degrees of apparent retention (Farr, 1987). Many choices have to be made in designing experiments, and one consequence is that different studies use different criteria (Farr, 1987). “Proficiency” or “mastery” at a skill has been defined as one errorless trial. Learning continues beyond the point where accuracy is perfect, and, more importantly, accuracy asymptotes rapidly in many tasks, leading to a potentially false conclusion that the material has been mastered … [and] because learning continues past error-free trials, overlearning may really just be a higher level of skill acquisition. (Arthur et al., 1998, p. 91; see also Regian & Schneider, 1990)
Personalized Refresher Training Based on a Model of Competency Acquisition and Decay
Published in Vincent G. Duffy, Advances in Applied Human Modeling and Simulation, 2012
W. Lewis Johnson, Alicia Sagae
Part of what makes language attrition difficult to account for in curriculum design is that its effects may depend upon each learner's previous learning behaviors and individual characteristics. For example, overlearning, i.e., continuing to practice a skill after it has been learned, can facilitate subsequent retention of the skill (Driskell et al., 1992). A learner's metalinguistic and metacognitive skills can affect how the learner mentally organizes, learns, and retains linguistic knowledge (Reilly, 1988). Over time these individual differences in learning behaviors can increase the differences between learners in terms of what they know and retain, making it difficult to design and apply a one-size-fits-all refresher training solution.
Memory and Training
Published in Christopher D. Wickens, Justin G. Hollands, Simon Banbury, Raja Parasuraman, Engineering Psychology and Human Performance, 2015
Christopher D. Wickens, Justin G. Hollands, Simon Banbury, Raja Parasuraman
We described above the importance of overlearning (beyond error-free performance) in moving a task toward automaticity. Of course, in training for skills that are subsequently used on a daily basis (like driving or word processing), such overlearning will occur during subsequent performance. But because skills related to emergency response procedures, for example, will not receive this same level of on-the-job practice, their retention will greatly benefit from overlearning (Logan & Klapp, 1991).
An extended challenge-based framework for practice design in sports coaching
Published in Journal of Sports Sciences, 2022
Nicola J Hodges, Keith R Lohse
There are three main subcomponent goals of maintenance practice: to reinforce key skills and strengths; to motivate and instil encouragement; and to develop a degree of automaticity in certain skills. Reinforcing key skills is a valuable part of practice, related to experimental work on the concept of “overlearning” and the hyperstabilization of memories (e.g., Rohrer et al., 2005; Shibata et al., 2017). Similarly, maintenance practice aimed at increasing automaticity is desirable because it allows performance to be achieved with low attentional demands (e.g., Beilock et al., 2002; Gray, 2004; Leavitt, 1979). Resources can then be allocated to dynamic and unpredictable environmental cues, which demand attentional resources (e.g., monitoring the play, opponents, the softness of the snow, etc.; all aspects important for transfer). To help maintain and reinforce such behaviours, the coach and athlete would work on creating scenarios with more opportunity to practice fundamental, developed skills. This maintenance and reinforcement could also be encouraged through feedback and video, where successful plays/attempts/routines are shown.
Flipping the precalculus classroom
Published in International Journal of Mathematical Education in Science and Technology, 2019
The literature on interleaved practice in mathematics is somewhat sparse. Mayfield and Chase [18] taught five basic algebra rules to students in three groups: one using interleaved practice, one using simple review, and one using simple review with extra practice. They found that ‘cumulative participants significantly outperformed the other participants on accuracy of application and problem-solving skills…’ (p. 116). An experiment by Rohrer et al. [19] randomly divided eight classes into two groups of four. Each group got the same practice problems over a 9-week period, but one got the problems blocked by type and the other got them interleaved. They found that the students with interleaved practice outperformed those with blocked practice, and that the effect was both large and statistically significant. Rohrer and Taylor [20] compared the effects of interleaved practice with those of ‘overlearning,’ that is, doing many problems of the same type after initial mastery. They found that while interleaved practice led to increased performance, overlearning did not. An experiment by Taylor and Rohrer [21] compared interleaved practice with blocked practice where the spacing between practice problems was held constant. They found that performance on the practice problems was worse for students doing interleaved practice, but that on a summary assessment, interleaved practice decreased the rate of discrimination errors, that is, errors of using the wrong technique for a problem. I am unaware of any studies of the effects of interleaved practice on final grades or on performance in subsequent classes. Clearly, more research on this topic is necessary.
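The contrast between the blocked and interleaved schedules compared in these studies can be made concrete with a small sketch. The following is only an illustration under assumed inputs: the four problem types and the three problems per type are hypothetical, not taken from the cited experiments.

```python
# Minimal sketch of blocked vs. interleaved practice schedules.
# Problem types and counts below are hypothetical placeholders.
from itertools import chain

problem_types = ["volume", "surface_area", "slope", "factoring"]
problems = {t: [f"{t}_{i}" for i in range(1, 4)] for t in problem_types}

# Blocked: all problems of one type are completed before moving to the next type.
blocked = [p for t in problem_types for p in problems[t]]

# Interleaved: the same problems arranged round-robin across types,
# so consecutive problems differ in type and the student must choose
# which technique applies.
interleaved = list(chain.from_iterable(zip(*(problems[t] for t in problem_types))))

print("blocked:    ", blocked)
print("interleaved:", interleaved)
```

Both schedules contain exactly the same problems; only the ordering differs, which is the manipulation in the studies described above.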
Assessment of tropical cyclone disaster loss in Guangdong Province based on combined model
Published in Geomatics, Natural Hazards and Risk, 2018
Shihong Chen, Danling Tang, Xiaoqing Liu, Chunhua Hu
However, because the training samples are limited and the training and testing sets are selected at random, PSO alone yields unstable, highly contingent testing results. To address this, cross-validation (CV) is commonly used. The basic principle of CV is to divide the data samples into K groups and to make each subset, in turn, the testing set, with the remaining K−1 subsets serving as the training set. This yields K SVR models, from which an overall precision rate or fitness value is calculated. In this way, over-learning and under-learning can be effectively avoided, and the testing results are more convincing.
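The K-fold procedure described here can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn, K = 5, and synthetic placeholder data; the feature matrix, the RBF kernel, and the C and gamma values are assumptions for demonstration, not parameters reported in the study.

```python
# Minimal sketch of K-fold cross-validation for an SVR model.
# Data, K = 5, and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                                   # placeholder predictors
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=100)    # placeholder targets

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_errors = []
for train_idx, test_idx in kf.split(X):
    # Each of the K subsets serves once as the testing set;
    # the remaining K-1 subsets form the training set.
    model = SVR(kernel="rbf", C=10.0, gamma=0.1)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    fold_errors.append(mean_squared_error(y[test_idx], pred))

# The averaged error can serve as the fitness value that an optimizer
# such as PSO would minimise when searching over C and gamma, which
# damps the instability of a single random train/test split.
print("mean CV error:", np.mean(fold_errors))
```

In a PSO-based setup of the kind the passage refers to, this cross-validated fitness would be recomputed for each candidate hyperparameter set proposed by the swarm, rather than relying on one random split.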