Introduction
Dominic Upton in Introducing Psychology for Nurses and Healthcare Professionals, 2013
Out of this perspective come a number of terms and methods that will crop up during this text:
- Hypothetical constructs: not directly observable, but inferred from behaviour; for example, memory, intelligence and personality.
- Model: a metaphor, involving a single fundamental idea or image.
- Theory: a complex set of inter-related statements that attempt to explain certain observed phenomena. The terms theory and model are often used, incorrectly, interchangeably.
- Hypothesis: a testable statement about the relationship between two or more variables, based on a theory or model.
- Variable: anything that can vary. Variables come in two kinds: the independent variable (IV), which the researcher manipulates, and the dependent variable (DV), which is measured to see whether it is affected by the IV.
Statistical Analysis
Abhaya Indrayan in Research Methods for Medical Graduates, 2019
Antecedents that can affect the outcome are called independent variables in a regression setup. In the birthweight example described in the previous paragraph, the mother’s weight, father’s weight, and Hb level are independent variables. They can be manipulated in the sense that you can choose parents of different weights and of different Hb levels in your study to see how these variations affect the birthweight. In the diabetic retinopathy example, the independent variables are duration of diabetes, nutrition level, regularity of treatment, and age. They can be quantitative or mixed. (If all of them are qualitative and the dependent variable is quantitative, the situation reverts to ANOVA, discussed earlier.) Independent variables are known by several other names depending on the context: regressors, factors, determinants, explanatory variables. Use the term that looks most appropriate for the measurement at hand; this text uses these terms interchangeably depending on the context.
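The regression setup described above can be sketched with ordinary least squares on synthetic data. All values here are fabricated for illustration; the coefficients (one per independent variable, plus an intercept) are what such a regression would estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical independent variables (regressors / explanatory variables)
mother_wt = rng.normal(60, 8, n)   # kg
father_wt = rng.normal(75, 10, n)  # kg
hb_level = rng.normal(12, 1.5, n)  # g/dL

# Simulated dependent variable: birthweight in grams, with assumed true slopes
birthweight = (1500 + 20 * mother_wt + 5 * father_wt + 30 * hb_level
               + rng.normal(0, 150, n))

# Ordinary least squares: a column of ones carries the intercept
X = np.column_stack([np.ones(n), mother_wt, father_wt, hb_level])
coefs, *_ = np.linalg.lstsq(X, birthweight, rcond=None)
print(coefs)  # intercept and one slope per independent variable
```

With enough observations, each estimated slope recovers the assumed effect of its independent variable on the outcome.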
Research Methods
Deborah Fish Ragin in Health Psychology, 2017
Independent Versus Dependent Variables
One advantage of experimental research designs is that the investigator can limit or control many of the variables to be studied. The variable that the investigator manipulates or controls is called the independent variable. In our example of the effects of exercise on stress, the independent variable is exercise. By carefully selecting the independent variables to include in a study, researchers can examine the effect of each variable on the study’s outcome. Experimental studies may have multiple independent variables. For simplicity, however, our example has just one independent variable, exercise. Note, however, that when designing a study on the effects of exercise on stress, researchers must define the type or types of exercise they include in the study.
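The exercise-and-stress example can be sketched as a simulated experiment: one independent variable (exercise condition, with two levels here) is manipulated, and the dependent variable (a stress score) is measured in each group. The group labels, means, and sample size are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical levels of the independent variable and their assumed
# true mean stress scores (the dependent variable)
levels = {"no_exercise": 60, "aerobic_30min": 50}

# Simulate 40 participants per condition with some individual variation
data = {name: rng.normal(mean, 10, 40) for name, mean in levels.items()}

# The effect of manipulating the IV appears as a difference in DV group means
effect = data["no_exercise"].mean() - data["aerobic_30min"].mean()
print(round(effect, 1))
```

Adding a second independent variable (e.g., exercise duration) would turn this into a factorial design, with one simulated group per combination of levels.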
Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare
Published in Journal of Health Care Chaplaincy, 2018
Kevin J. Flannelly, Laura T. Flannelly, Katherine R. B. Jankowski
Based on Mill’s (1859) principles, an experimenter tries to control or hold constant all the variables that can affect the outcome (the dependent variable) of an experiment apart from the experimental manipulation (Keppel & Wickens, 2004), which is also called the independent variable, intervention, or treatment (see L. T. Flannelly et al., 2014a). The variables that the researcher wants to control are known as both extraneous variables, because they are extraneous to the purpose of the experiment, and confounding variables, because their effects are confounded with the effects of the independent variable if they are not properly controlled. The explicit concern is that “the operation of some extraneous variable causes the observed values of the dependent variable to inaccurately reflect the effect of the independent variable” (Cherulnik, 1983, p. 21). In other words, the observed effect of the experiment is not due to the independent variable, but to the extraneous variable. Thus, the failure to control extraneous variables undermines the ability of researchers to logically make the causal inference that the apparent effect of an experimental manipulation is, in fact, the result of the manipulation (i.e., the independent variable or intervention). Unfortunately, it is not very easy to control extraneous variables outside of a laboratory setting (Rubinson & Neutens, 1987).
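The concern described above can be made concrete with a small simulation: an extraneous variable influences both who receives the treatment and the outcome, so a naive comparison shows a large apparent effect even though the true treatment effect is zero. Randomizing assignment breaks the link and the apparent effect vanishes. All quantities here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# An uncontrolled extraneous variable that affects both assignment and outcome
extraneous = rng.normal(0, 1, n)

# Treatment assignment depends partly on the extraneous variable
treated = (extraneous + rng.normal(0, 1, n)) > 0

# The outcome depends only on the extraneous variable: true treatment effect is 0
outcome = 2.0 * extraneous + rng.normal(0, 1, n)

# Naive comparison: the extraneous variable's effect is confounded with the IV's
naive_effect = outcome[treated].mean() - outcome[~treated].mean()

# Randomized assignment is unrelated to the extraneous variable
randomized = rng.random(n) > 0.5
randomized_effect = outcome[randomized].mean() - outcome[~randomized].mean()

print(round(naive_effect, 2), round(randomized_effect, 2))
```

The naive difference inaccurately reflects the effect of the independent variable, exactly as the quoted passage warns; under randomization the estimate is near the true value of zero.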
Procedural Integrity Reporting in the Journal of Organizational Behavior Management (2000–2020)
Published in Journal of Organizational Behavior Management, 2022
Daniel Cymbal, David A. Wilder, Nelmar Cruz, Grant Ingraham, Mary Llinas, Ronald Clark, Marissa Kamlowsky
Procedural integrity, also called treatment fidelity or procedural fidelity, refers to the extent to which the independent variable is implemented as described, and is also important for internal validity. Ideally, researchers collect formal data on the accuracy of implementation of the independent variable and report these data when publishing their research, similar to the way in which data on interobserver agreement on the dependent variable are reported. Of course, in some research (e.g., laboratory-based research) independent variables may be implemented mechanically or digitally. In these studies, implementation of the independent variable is less subject to error, so reporting data on procedural integrity may be less important. However, most applied behavior analytic research is conducted in the “field,” where interventions are typically delivered by the experimenter, therapist, parent, manager, or consultant, making these independent variables more at risk of implementation errors.
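Procedural integrity is commonly summarized as the percentage of intervention steps implemented as described. A minimal sketch, with entirely hypothetical step names for one observed session:

```python
# Hypothetical checklist for one session: True means the step was
# implemented as described in the protocol
session_checklist = {
    "deliver_instruction": True,
    "wait_5s_for_response": True,
    "provide_prompt": False,      # step omitted by the implementer
    "deliver_reinforcer": True,
}

# Integrity = steps implemented correctly / total steps, as a percentage
implemented = sum(session_checklist.values())
integrity = 100 * implemented / len(session_checklist)
print(f"Procedural integrity: {integrity:.0f}%")  # 3 of 4 steps -> 75%
```

In practice such checklists are scored by an independent observer across a sample of sessions, paralleling how interobserver agreement is reported for the dependent variable.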
Effectiveness of video-based modelling to facilitate conversational turn taking of adolescents with autism spectrum disorder who use AAC
Published in Augmentative and Alternative Communication, 2018
Abirami Thirumanickam, Parimala Raghavendra, Julie M. McMillan, Willem van Steenbrugge
The essential feature of the multiple baseline design is the successive introduction of the intervention across the data series (participants, settings, behavior). This means that the intervention is not introduced to all series simultaneously; instead, intervention for subsequent data series is introduced following observable changes in the preceding data series. This controls for the possibility that changes in the dependent variable measures are due to other uncontrolled events (e.g., age, practice, maturity) rather than to the independent variable (Kazdin, 2011). In some cases, a longer lag between the introduction of the intervention and the observable change can lead to prolonged baselines for the subsequent series. Prolonged baselines have been found to reduce methodological rigour (Kazdin, 2011; Kratochwill & Levin, 2014) and might increase irrelevant and competing behaviors in participants (Panyan, Boozer, & Morris, 1970). To counteract the potential negative effects of prolonged baselines, a pre-determined intervention start point system was used to ensure the interventions were introduced in a successive pattern, in line with the requirements of the multiple baseline design. Due to the small number of participants (two) initially recruited, a second strand of the study was conducted to ensure adequate replication of effect. This second strand also involved two participants.
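The staggered-introduction logic can be sketched schematically: each data series gets a pre-determined, successively later session at which the intervention (the independent variable) begins. Participant labels, start points, and session counts below are hypothetical.

```python
# Hypothetical pre-determined intervention start points for two data series
start_points = {"participant_1": 5, "participant_2": 9}
n_sessions = 14

for participant, start in start_points.items():
    # 'B' marks baseline sessions, 'I' marks intervention sessions
    phases = ["B" if s < start else "I" for s in range(1, n_sessions + 1)]
    print(participant, "".join(phases))
```

Because the second series remains in baseline while the first receives the intervention, a change that appears only after each series' own start point supports attributing the change to the independent variable rather than to uncontrolled events.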