Cluster Analysis
Published in M. Venkataswamy Reddy, Statistical Methods in Psychiatry Research and SPSS, 2019
An evaluation study is conducted to assess the effectiveness of an implemented program or the impact of developmental projects on the development of the project area. Evaluation is the determination of the results attained by some activity (whether a program, a drug, a therapy, or an approach) designed to accomplish some valued goal or objective. Evaluation research is thus directed at assessing or appraising the quality and quantity of an activity and its performance, and at specifying the attributes and conditions required for its success. It is also concerned with change over time, since evaluation research asks what kind of change the program views as desirable, by what means that change is to be brought about, and by what signs such change can be recognized.
Palliative care education: establishing the evidence base
Published in Lorna Foyle, Janis Hostad, Delivering Cancer and Palliative Care Education, 2018
Palliative care education is delivered by a range of providers in a variety of settings, from higher education institutions to specialist palliative care services within hospital and community settings, and to both uni- and multi-professional audiences. Education provision may include higher education accredited undergraduate and postgraduate modules and programmes, as well as non-accredited study days and short courses. Kreuger’s (1994) stark statement quoted above serves as a reminder to healthcare professionals and education providers that they are accountable for the resources they use, with a requirement to justify their actions and demonstrate an evidence base for their care and service provision. This chapter aims to identify the need for evaluation of palliative care education, and to discuss how evaluation of palliative care education has developed in meeting stakeholders’ demands for evidence-based practice. The discussion will initially explore the context of education provision delivered by higher education institutions and specialist palliative care services. Subsequently, the theory of evaluation research will be discussed. The extent to which the evaluation of palliative care education has been developed will be explored, alongside issues for consideration when planning and undertaking evaluation research in this field.
Evaluating Introductions and Literature Reviews
Published in Fred Pyrczak, Maria Tcherni-Buzzeo, Evaluating Research in Academic Journals, 2018
Another common type of study – again, mostly non-theoretical – evaluates the effectiveness of a policy or program (evaluation research). For example, suppose researchers wonder whether boot camps reduce juvenile delinquency compared with a traditional community service approach. To test this, the researchers secure a judge’s agreement to randomly assign half of the youths adjudicated for minor offenses to boot camps and the other half to community service, and then compare the two groups’ recidivism rates a year later. Evaluation research is covered in Appendix B: A Special Case of Program or Policy Evaluation.
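To make the logic of such a design concrete, here is a minimal sketch (not from the book) of how the random assignment and the year-later recidivism comparison might be carried out; the group sizes, reoffending counts, and the two-proportion z-test are illustrative assumptions, not the authors’ procedure.

```python
import math
import random

def assign_groups(youth_ids, seed=42):
    """Randomly split adjudicated youths into two equal-sized groups."""
    rng = random.Random(seed)
    shuffled = youth_ids[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # boot camp, community service

def two_proportion_z(reoffended_a, n_a, reoffended_b, n_b):
    """Two-proportion z-test for a difference in recidivism rates."""
    p_a, p_b = reoffended_a / n_a, reoffended_b / n_b
    p_pool = (reoffended_a + reoffended_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical one-year follow-up data for 200 youths.
boot_camp, community = assign_groups(list(range(200)))
z = two_proportion_z(reoffended_a=34, n_a=len(boot_camp),
                     reoffended_b=28, n_b=len(community))
print(f"z = {z:.2f}")  # compare against ±1.96 for a 5% two-sided test
```

Random assignment is what licenses the causal reading of the comparison: any systematic difference in recidivism can then be attributed to the program rather than to pre-existing differences between the groups.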
Searching for active ingredients in rehabilitation: applying the taxonomy of behaviour change techniques to a conversation therapy for aphasia
Published in Disability and Rehabilitation, 2021
Fiona Johnson, Suzanne Beeke, Wendy Best
The traditional focus of evaluation research across disciplines is to define and report on outcomes of intervention. However, the high scientific standards applied to outcome reporting are rarely extended to the reporting of intervention content. Consequently, the components of intervention that may be responsible for producing change are often under-reported and poorly defined [1–3]. It is argued that the poor specification and characterisation of intervention content risks undermining the credibility and evidence base for rehabilitation [2,3]. Even where intervention content is detailed, a lack of agreed terminology means that essentially similar processes may be named differently from study to study, whilst, in contrast, generic descriptions such as “feedback” mask significant variation in the procedures being used [4,5]. Under-reporting and poor specification of intervention content pose a challenge for the accurate implementation of evidence-based interventions in clinical contexts, the replication of interventions’ effects, and the useful comparison and accumulation of evidence in systematic reviews [4,6,7]. Finally, they act as a barrier to analysing which components of intervention are most involved in creating change, and examining how these “active ingredients” work.
Evaluation of the National CLAS Standards: Tips and Resources
Published in Journal of Gerontological Social Work, 2021
The toolkit’s best feature is that it is aligned with fourteen standards of the National CLAS Standards. It is straightforward in assisting the reader to identify evaluation research questions, choose appropriate measures, collect data, share findings, and make changes to each short- and long-term goal. It also includes field examples from various HCOs and local organizations serving the LEP population to promote the reader’s understanding. Named “Implementation in action,” each section addresses organizational changes, staff diversity, the healthcare interpreter certificate program, the remote video voice medical interpretation project, and the organization’s efforts in data collection, all real issues one frequently encounters in the field. However, each standard’s short- and long-term goals overlap somewhat, because devising individual yet distinctive goals for each standard at the time of each evaluation can be challenging.
PYDSportNET: A knowledge translation project bridging gaps between research and practice in youth sport
Published in Journal of Sport Psychology in Action, 2018
Nicholas L. Holt, Martin Camiré, Katherine A. Tamminen, Kurtis Pankow, Shannon R. Pynn, Leisha Strachan, Dany J. MacDonald, Jessica Fraser-Thomas
We have yet to adequately monitor and evaluate the exchange and use of knowledge. Evaluation research is a key part of attempting to use social media for behavior change (Carpenter & Amaravadi, 2016). The number of people who view or interact with something on social media may give an indication of reach, but it is insufficient for assessing influence. With regard to our life skills infographics, for example, it would be important not only to understand who reads the information (e.g., coaches) and to whom they further distribute it (e.g., other coaches, athletes), but also whether they use the information to alter their practices, and the effects of any changes to practice. Likewise, it would also be important to understand how reach and frequency of exposure influence uptake, as both have been shown to positively influence the impact of health mass media campaigns (Noar, 2006). While it is difficult to establish how many messages or how much reach is necessary to achieve “redundancy” of messaging, exposure rates of effective mass media campaigns in health have been shown to range from 69% (Thombs & Hamilton, 2002) to 94% (Kaiser Family Foundation, 2004), in terms of the estimated percentage of the target audience reached. Graham et al.'s (2006) KTA framework will offer a useful approach for studying the use and effects of knowledge products on action in the future.
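To illustrate the distinction drawn here between reach and influence, the following minimal sketch computes reach, exposure frequency, and a crude uptake proxy for a hypothetical campaign; all field names and numbers are assumptions for illustration, not PYDSportNET data.

```python
from dataclasses import dataclass

@dataclass
class CampaignStats:
    """Hypothetical engagement counts for one knowledge product."""
    target_audience: int           # e.g., coaches in the target population
    unique_viewers: int            # distinct people who saw the infographic
    total_impressions: int         # views including repeats
    reported_practice_change: int  # viewers self-reporting changed practice

def reach_pct(s: CampaignStats) -> float:
    """Estimated percentage of the target audience reached."""
    return 100.0 * s.unique_viewers / s.target_audience

def exposure_frequency(s: CampaignStats) -> float:
    """Average number of exposures per person reached."""
    return s.total_impressions / s.unique_viewers

def uptake_pct(s: CampaignStats) -> float:
    """Share of viewers reporting a practice change: a crude proxy
    for influence, which raw reach alone cannot capture."""
    return 100.0 * s.reported_practice_change / s.unique_viewers

stats = CampaignStats(target_audience=5000, unique_viewers=3450,
                      total_impressions=9800, reported_practice_change=310)
print(f"reach: {reach_pct(stats):.0f}%")  # cf. the 69-94% range cited above
print(f"frequency: {exposure_frequency(stats):.1f} exposures per viewer")
print(f"uptake: {uptake_pct(stats):.0f}%")
```

Separating these three metrics makes the argument explicit: a campaign can score well on reach and frequency while uptake, the quantity evaluation research ultimately cares about, remains low.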