Education and professional development
Published in The Contribution of Family Medicine to Improving Health Systems, 2020
Michael Kidd, Cynthia Haq, Jan De Maeseneer, Jeffrey Markuns, Hernan Montenegro, Waris Qidwai, Igor Svab, Wim Van Lerberghe, Tiago Villanueva, Charles Boelen, Vincent Hunt, Marc Rivo, Edward Shahady, Margaret Chan
Beyond reshaping goals and objectives and placing greater focus on complex, practical skills, competency-based training has perhaps had its biggest impact on the evaluation methods used in medical education. Traditionally, multiple-choice questions have been used to assess medical knowledge, and global rating scales completed by instructors have been a popular method for evaluating clinical skills. These methods are limited in their ability to accurately assess competency across a wider range of skill areas. Other techniques are increasingly being used to evaluate competency in the workplace, providing a more comprehensive assessment of overall competency; these include checklists, objective structured clinical examinations (OSCEs), simulations and models, 360-degree assessments, and portfolios. Such broader, more comprehensive evaluations also allow for more robust evaluation of training programs themselves, supporting continuous improvement in medical education.
Planning the Initial Version
Published in Lucy Jane Miller, Developing Norm-Referenced Standardized Tests, 2020
An additional decision facing the test developer relates to response format. Multiple-choice and true/false responses are well known. Another frequently used format is the Likert scale, in which the subject is asked to respond on a scale of 1–5, or 1–3. Examples include ranking a behavior from “observed all of the time” to “observed none of the time” or “most liked by child” to “least liked by child.” The format provides a systematic method for item responses, regardless of content.
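As a concrete illustration, here is a minimal Python sketch of how a 5-point Likert-format item might be encoded for scoring. The endpoint anchors come from the text above; the intermediate labels are hypothetical placeholders, not drawn from any specific instrument.

```python
# Minimal sketch: scoring a 5-point Likert-format item.
# Endpoint anchors are from the text; intermediate labels are hypothetical.
LIKERT_5 = {
    "observed none of the time": 1,
    "observed rarely": 2,            # hypothetical label
    "observed some of the time": 3,  # hypothetical label
    "observed most of the time": 4,  # hypothetical label
    "observed all of the time": 5,
}

def score_likert(response: str) -> int:
    """Map a verbal anchor to its numeric scale value (1-5)."""
    return LIKERT_5[response]

print(score_likert("observed all of the time"))  # -> 5
```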
Validity I: Logical/Face and content
Published in Claudio Violato, Assessing Competence in Medicine and Other Health Professions, 2018
In classroom and standardized educational testing, multiple-choice tests also suffer from face validity problems. Many people believe that multiple-choice items measure only rote knowledge, even though the items can be constructed to measure higher-level cognitive processes. This general belief about multiple-choice items can influence reaction and orientation toward these types of items and undermine rapport, credibility, and general classroom climate. It is common to hear students say, “I just can’t take multiple-choice tests.” When using these tests, you should be aware that they have validity limitations.
Development and evaluation of a pre-clerkship spiral curriculum: data from three medical school classes
Published in Medical Education Online, 2023
Anthony J. Maltagliati, Joshua H. Paree, Kadian L. McIntosh, Kevin F. Moynahan, Todd W. Vanderah
Future directions for this work include statistical analysis using Pearson correlation to gauge the relationship between each learner’s scores on Spiral Curriculum questions and their scores on graded examinations and USMLE board examinations, to determine whether the Spiral Curriculum may be a useful predictor of performance on high-stakes block and board examinations. Additionally, computing the mean and standard deviation for each session, as well as item analysis of each multiple-choice question’s difficulty (percentage correct) and discrimination (correlation of responses to individual items with overall test score), will highlight opportunities to revise and improve the multiple-choice questions themselves and/or identify gaps in how the material is taught by faculty. Repeating the survey for the Class of 2023 and beyond, who will have had the Spiral Curriculum administered virtually due to the COVID-19 pandemic, will provide additional data to adequately power the analyses and offer valuable insight into how the Spiral Curriculum was received in a remote learning setting. Finally, our group intends to weigh the feasibility of a Spiral Curriculum for the clerkship year(s) of medical school, with the intent of revisiting and integrating high-yield material across core clerkships and preparing medical students for USMLE Step 2.
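For clarity, here is a minimal Python sketch of the classical item statistics the authors describe: difficulty as the proportion answering correctly, and discrimination as the Pearson correlation between responses to an item and the overall test score. It assumes dichotomously scored (0/1) items; the rest-of-test correction used for discrimination is one common convention, not necessarily the authors' exact procedure, and the data below are simulated for illustration only.

```python
import numpy as np

def item_analysis(responses: np.ndarray):
    """Classical item analysis for a 0/1-scored item-response matrix.

    responses: shape (n_examinees, n_items), 1 = correct, 0 = incorrect.
    Returns (difficulty, discrimination), one value per item.
    """
    n_items = responses.shape[1]
    total = responses.sum(axis=1)

    # Difficulty: proportion of examinees answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination: Pearson correlation of each item with the
    # rest-of-test score (item excluded from the total to avoid
    # inflating the correlation).
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

# Example with simulated responses (hypothetical data, illustration only).
rng = np.random.default_rng(0)
sim = (rng.random((200, 10)) < 0.7).astype(int)
diff, disc = item_analysis(sim)
```

Items with very high or very low difficulty, or near-zero discrimination, are the usual candidates for revision, which is the kind of opportunity the authors aim to surface.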
Scoping Review of Critical Thinking Literature in Healthcare Education
Published in Occupational Therapy In Health Care, 2023
Christine Berg, Rachel Philipp, Steven D. Taff
The variance and lack of clarity in defining and teaching critical thinking manifests similarly in the instruments and strategies used to measure the construct. Using a multiple-choice question format for critical thinking assessment is problematic because multiple-choice tests focus on cognitive capacity and rote knowledge of features rather than the analysis and inference inherent in critical thinking (Halpern, 1998). Such assessment scores predict critical thinking component skill performance but not application in the complex, real-life situations where critical thinking typically occurs. In contrast, open-ended questions, such as those used in the Ennis-Weir Critical Thinking Essay Test or Watson-Glaser Critical Thinking Appraisal, are preferable as more accurate measures of critical thinking-in-action (Ennis, 1993).
Gap between willingness and behavior in the vaccination against influenza, pneumonia, and herpes zoster among Chinese aged 50–69 years
Published in Expert Review of Vaccines, 2021
Xinyue Lu, Jia Lu, Liping Zhang, Kewen Mei, Baichu Guan, Yihan Lu
The survey on knowledge included the following questions: whether you know the disease (influenza, pneumonia, and herpes zoster); what the primary symptoms, causes, and transmission routes of the diseases are; whether the disease is easily transmitted through contact with patients; whether the elderly are susceptible to the diseases; whether the disease may recur after recovery from infection; whether the diseases can be prevented; whether you know the vaccines; whether vaccination can prevent the diseases; and what the target populations and vaccination schedules are. We scored the answers to these questions as follows: 3 points for a correct answer, 0 points for an incorrect answer, and 1 point for answering ‘not sure’. For multiple-choice questions, 1 point was given for each correct answer selected. Scores from these questions were then summed to obtain a total score, stratified by knowledge of the vaccines and of the preventable diseases.
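To make the scoring rule concrete, here is a minimal Python sketch of the scheme described above. The answer statuses mirror the text (3 / 1 / 0 points for correct / not sure / incorrect, and 1 point per correct option on multiple-choice items); the example answers themselves are hypothetical placeholders, not the survey's actual items.

```python
# Minimal sketch of the knowledge-scoring rule described in the text.
SINGLE_POINTS = {"correct": 3, "not sure": 1, "incorrect": 0}

def score_single(answer_status: str) -> int:
    """Score a single-answer knowledge question: 3 / 1 / 0 points."""
    return SINGLE_POINTS[answer_status]

def score_multiple(selected: set, correct: set) -> int:
    """Score a multiple-choice question: 1 point per correct option selected."""
    return len(selected & correct)

# Example usage (hypothetical answers, for illustration only):
total = (
    score_single("correct")                       # 3 points
    + score_single("not sure")                    # 1 point
    + score_multiple({"fever", "cough"},          # 2 of 3 correct options
                     {"fever", "cough", "rash"})  # -> 2 points
)
print(total)  # 3 + 1 + 2 = 6
```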