Quantitative Methods for Analyzing Experimental Studies in Patient Ergonomics Research
Published in Richard J. Holden, Rupa S. Valdez, The Patient Factor, 2021
Kapil Chalil Madathil, Joel S. Greenstein
The experimental design, structure, and measurement type predominantly determine the statistical techniques that can be used to analyze the data collected from patient ergonomics studies. A sound experimental design is required to enable patient ergonomics researchers to gather interpretable comparisons of the effects of manipulated variables. At a minimum, a good experimental design specifies the independent variables and the states at which each will be manipulated or held constant, the dependent variables that measure the outcomes of the experiment, the characteristics and number of participants, and a scheme for replicating the unique states of the manipulated variables. There are two common methods of collecting data in experimental studies: the between-subjects, or independent, design and the within-subjects, or repeated-measures, design. The former manipulates the independent variable across different groups of participants, and the latter manipulates it within the same group. Our case study is an example of a within-subjects experimental design. The independent variable in our study is the type of FHx data collection interface. It is tested at two levels: conventional and conversational.
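As a concrete illustration of how data from such a within-subjects design with two levels might be analyzed, the following Python sketch applies a paired comparison to hypothetical task-time data; the variable names and values are invented for illustration, and this is not the analysis reported in the case study.

```python
# Minimal sketch: paired analysis of a within-subjects design with two
# conditions (hypothetical data; not the analysis from the case study).
import numpy as np
from scipy import stats

# Task-completion times (seconds) for the same 10 participants under
# each level of the independent variable (interface type).
conventional = np.array([412, 380, 455, 398, 430, 405, 390, 420, 445, 400])
conversational = np.array([365, 372, 410, 350, 395, 360, 355, 388, 402, 370])

# Because each participant experiences both conditions, the comparison
# is made on the within-participant differences.
differences = conventional - conversational
t_stat, p_value = stats.ttest_rel(conventional, conversational)

print(f"Mean within-participant difference: {differences.mean():.1f} s")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```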
Preliminaries
Published in Anastasios A. Tsiatis, Marie Davidian, Shannon T. Holloway, Eric B. Laber, Dynamic Treatment Regimes, 2019
Anastasios A. Tsiatis, Marie Davidian, Shannon T. Holloway, Eric B. Laber
In an observational study, individuals are not assigned treatment via randomization according to some experimental design, but rather receive treatment according to physician discretion or their own choice. In some situations, it may be unethical to conduct a randomized study; for example, it would be unacceptable to undertake a study randomizing participants to smoking or not. Thus, it is of interest to consider whether it is possible to estimate the average causal treatment effect δ* from the data from such a study, and under what conditions.
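To illustrate why the lack of randomization matters, the sketch below simulates observational data in Python and contrasts a naive difference in means with an inverse-probability-weighted estimate of the average treatment effect. It assumes no unmeasured confounding and a known propensity score, and it is an illustrative sketch rather than the estimator developed in the chapter.

```python
# Illustrative sketch: estimating an average treatment effect from
# observational (non-randomized) data under the assumption of no
# unmeasured confounding. Simulated data; not the chapter's estimator.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                      # measured confounder
p_treat = 1 / (1 + np.exp(-x))              # treatment depends on x
a = rng.binomial(1, p_treat)                # treatment received (not randomized)
y = 1.0 * a + 2.0 * x + rng.normal(size=n)  # outcome; true effect delta* = 1.0

# Naive comparison ignores confounding by x and is biased.
naive = y[a == 1].mean() - y[a == 0].mean()

# Inverse probability weighting using the (here, known) propensity score.
ipw = np.mean(a * y / p_treat) - np.mean((1 - a) * y / (1 - p_treat))

print(f"Naive difference in means: {naive:.2f}")
print(f"IPW estimate of delta*:    {ipw:.2f}")
```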
Handling, Maintenance, and Disposal of Animals Containing Radioactivity
Published in Howard J. Glenn, Lelio G. Colombetti, Biologic Applications of Radiotracers, 2019
Experimental design comes within the purview of the investigator. Even so, the investigator should discuss it with the director of the animal facility and with the Radiological Safety Officer. It is assumed that the investigator has a license to perform the study; without one, the necessary information must be submitted to obtain a license or an authorization before proceeding. Even when licensed, it is imperative that the investigator have the experience necessary to conduct the proposed study. Investigator experience is of prime concern if studies are to be done adequately and safely. Not only the investigator but the entire investigational staff must have adequate training. An investigator must not change the experimental design during the experiment; doing so complicates the control of radiation and radioactivity and creates an extremely difficult problem.
Music and musical elements in the treatment of childhood speech sound disorders: A systematic review of the literature
Published in International Journal of Speech-Language Pathology, 2023
Mirjam van Tellingen, Joost Hurkmans, Hayo Terband, Roel Jonkers, Ben Maassen
Description of the setting varied across studies: three studies described the setting sufficiently, two gave a minimal description, and three described it insufficiently. Four studies gave a description of the therapist or interventionist conducting the intervention. Procedures in the baseline and intervention phases were sufficiently described by Gross et al. (2010), Lagasse (2012), and Martikainen and Korpilahti (2011). None of the four case studies sufficiently described the baseline phase, but Beathard and Krout (2008) and Helfrich-Miller (1994) described the intervention phase in sufficient detail. All the experimental design studies described the dependent variable, but none met all three criteria of operationally defining the dependent variable, describing the data collection on these target behaviours, and giving a reason for targeting these behaviours.
Variational Bayesian inference for association over phylogenetic trees for microorganisms
Published in Journal of Applied Statistics, 2022
Xiaojuan Hao, Kent M. Eskridge, Dong Wang
Just as linear mixed models can incorporate multiple random effects to account for dependence among continuous random variables, the Bayesian model described here can be extended to incorporate multiple phylogenetic trees. For example, experiments performed at several different sites can lead to multiple phylogenetic trees. This can be modeled by assuming a common prior distribution for the S variable at each site, so that the association relationship is modeled while explicitly accounting for site effects (see Section 6 of the appendix). Other problems involving extensive hierarchical relationships can be modeled with a similar approach, as the variational Bayesian algorithm described here provides a potent tool for overcoming the computational difficulties posed by the tree structure; an example would be inference over the directed acyclic graph of the Gene Ontology [GO, 15]. The external variable, X, can also be defined in various ways to accommodate complex experimental designs.
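The following is a minimal generative sketch of the common-prior idea described above, written in Python with invented dimensions and parameter names; it is not the paper's variational algorithm, only an illustration of how site-specific parameters can share a common prior while site effects remain explicit.

```python
# Minimal sketch of the hierarchical structure described above:
# site-specific parameters (standing in for the per-site S variables)
# drawn from a common prior, with explicit site effects.
# Hypothetical generative model; not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_taxa = 3, 20

# Common (shared) prior over the association parameter.
mu_common, tau_common = 0.0, 1.0

# Each site draws its own S from the common prior ...
s_site = rng.normal(mu_common, tau_common, size=n_sites)

# ... and site-level observations combine the shared association
# structure with a site-specific effect.
site_effect = rng.normal(0.0, 0.5, size=n_sites)
abundance = np.array([
    rng.normal(s_site[k] + site_effect[k], 1.0, size=n_taxa)
    for k in range(n_sites)
])
print(abundance.shape)  # (n_sites, n_taxa)
```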
Merging Yoga and Occupational Therapy for Parkinson’s Disease: A Feasibility and Pilot Program
Published in Occupational Therapy In Health Care, 2020
Laura A. Swink, Brett W. Fling, Julia L. Sharp, Christine A. Fruhauf, Karen E. Atler, Arlene A. Schmid
We completed a pilot and feasibility study and employed a within-subjects quasi-experimental design. Feasibility was measured based on process, resources, management, and scientific-basis outcomes throughout the program (Tickle-Degnen, 2013). Participants served as their own controls, and outcomes were measured at three time points: baseline assessments (eight weeks prior to the intervention), pre-assessments (just before the intervention), and post-assessments (immediately following the 8-week intervention). Control-period differences were defined as the difference between the baseline and pre-assessments; intervention-period differences were defined as the difference between the pre- and post-assessments. All procedures were approved by the University’s Institutional Review Board prior to any recruitment or data collection.
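As an illustration of this own-control logic, the Python sketch below computes control-period (pre minus baseline) and intervention-period (post minus pre) changes from hypothetical outcome scores and compares them within participants; it is not the authors' statistical analysis.

```python
# Minimal sketch: participants as their own controls across three time
# points (hypothetical outcome scores; not the authors' analysis).
import numpy as np
from scipy import stats

baseline = np.array([42, 38, 45, 40, 44, 39, 41, 43])  # 8 weeks before intervention
pre      = np.array([43, 37, 44, 41, 45, 40, 41, 42])  # just before intervention
post     = np.array([48, 41, 49, 46, 50, 44, 46, 47])  # after 8-week intervention

control_change      = pre - baseline   # change over the no-intervention period
intervention_change = post - pre       # change over the intervention period

# Compare the two change scores within participants.
t_stat, p_value = stats.ttest_rel(intervention_change, control_change)
print(f"Mean control-period change:      {control_change.mean():.2f}")
print(f"Mean intervention-period change: {intervention_change.mean():.2f}")
print(f"Paired t-test on change scores: t = {t_stat:.2f}, p = {p_value:.4f}")
```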