External Control Using RWE and Historical Data in Clinical Development
Published in Harry Yang, Binbing Yu, Real-World Evidence in Drug Development and Evaluation, 2021
Qing Li, Guang Chen, Jianchang Lin, Andy Chi, Simon Davies
Because using RWD and historical data as external controls raises many practical considerations, the Health Authority recommends a prospectively defined statistical analysis plan (SAP). The SAP should specify the analysis population, the definition of endpoints, the study objectives, the testable hypotheses, and the statistical methods. The analysis itself should also be executed with integrity. It is critical to prospectively plan the use of RWD/historical data and to design the study objectively so that it can support confirmatory evaluation. During the design stage, clinical outcome variables from RWD/historical data should not be analyzed, to avoid data dredging: the design stage must be outcome-free to mimic a prospective randomized clinical trial.
Statistical Analysis
Published in Abhaya Indrayan, Research Methods for Medical Graduates, 2019
All-round availability of computers and software has made it easier to re-analyze data after dropping some inconvenient observations. This misuse is called data dredging. Editors and reviewers would rarely be able to detect this because a finished report may not contain any such evidence.
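As an illustration of the mechanism described above (not from the chapter), the following Python sketch uses simulated, hypothetical data to show how selectively dropping "inconvenient" observations exaggerates an apparent group difference even when none exists; a stdlib permutation test stands in for whatever test a real analysis would use.

```python
import random
import statistics

def permutation_p(a, b, n_iter=2000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

rng = random.Random(42)
# Two groups drawn from the SAME distribution: any difference is noise.
a = [rng.gauss(0.0, 1.0) for _ in range(30)]
b = [rng.gauss(0.0, 1.0) for _ in range(30)]
p_honest = permutation_p(a, b)

# "Data dredging": drop the three observations in b that most shrink
# the apparent difference, then re-run the same test.
if statistics.mean(a) > statistics.mean(b):
    b_dredged = sorted(b)[:-3]   # drop b's largest values
else:
    b_dredged = sorted(b)[3:]    # drop b's smallest values
p_dredged = permutation_p(a, b_dredged)
# The dredged mean difference is at least as large as the honest one,
# so the reported p-value shrinks without any real effect existing.
```

A reader seeing only the final report would have no way to know that three observations were discarded, which is exactly the detection problem the author describes.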
The Big Picture
Published in David A. Katerndahl, Directing Research in Primary Care, 2018
Emanuel et al. (2000) proposed seven characteristics of ethical research. The study must have value by potentially enhancing health or knowledge. The study must be rigorous, having scientific validity. Subject selection must be fair, and the risk-benefit ratio must be favorable. There must be independent review of the study, and subjects must give informed consent. Finally, subjects must be treated with respect, including confidentiality, monitored progress, and the ability to withdraw. Additional ethical issues arise around treatments: should comparison groups receive placebos or conventional treatment? Under what conditions can high-risk treatments be used? Ethical issues also arise in the analysis of results. In addition to “data-dredging”, ignoring issues such as statistical power and experiment-wise alpha when planning a study is unethical. Similarly, using a sample smaller or larger than necessary puts subjects at inappropriate risk, because the study either has too few subjects to detect a difference or has more subjects than were needed to detect it. Finally, unethical communication practices are possible: the presentation of results in graphs can be misleading, and inappropriate authorship and multiple publication of results are unethical practices. It is the discipline that must set the standards for research.
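To make the sample-size point concrete, here is a hedged sketch (not from the chapter) of the standard normal-approximation calculation for a two-sample comparison of means, n per group ≈ 2(z₁₋α/₂ + z₁₋β)²σ²/δ²; the effect size and variance below are illustrative values only.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided, two-sample
    comparison of means, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ≈ 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Detecting a half-standard-deviation difference with 80% power:
n = n_per_group(delta=0.5, sigma=1.0)  # 63 per group
```

Running such a calculation before enrolling anyone is precisely how a study avoids exposing either too few or too many subjects to risk.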
To Slice or Perish
Published in Seminars in Ophthalmology, 2023
Jim Shenchu Xie, Mohammad Javed Ali
Salami publication has received harsh criticism for myriad reasons. Inappropriate data fragmentation encourages other forms of malpractice to accommodate multiple papers, including omission of key information (e.g., details of original dataset or similar studies), inappropriate extrapolation of results, data dredging, and data falsification.7,23 Salami slicing also decreases motivation to pursue large-scale, methodologically rigorous studies that confirm and expand on preliminary findings. Consequently, scientific progress is stalled, and the literature is polluted with the smallest publishable units of scant significance. Although inflation of publication output seems beneficial in the short term, a high publication count is meaningless if the overall value of accumulated work is deficient. Furthermore, research visibility and citation count may be decreased rather than enhanced by salami slicing as fragmented data tend to be published in lower impact factor venues and journals in non-related disciplines.14 Discovery of salami publication may damage the author’s reputation and negatively impact their future career, especially if duplicitous attempts were made to conceal this malpractice.
Using fuzzy-set qualitative comparative analysis to explore causal pathways to reduced bullying in a whole-school intervention in a randomized controlled trial
Published in Journal of School Violence, 2022
Emily Warren, G.J. Melendez-Torres, Chris Bonell
Using truth tables generated with the Tosmana software (Cronqvist, 2011), we assessed each model’s consistency and coverage. Rihoux and Ragin recommend that consistency scores be >0.75 and coverage scores be >0.85 (Rihoux & Ragin, 2008). We valued consistency over coverage because consistency better shows whether the data supported our hypotheses. While low coverage may be a problem, it also indicates that explanations outside the model may contribute to the outcome; this was expected because schools take diverse actions to reduce bullying, not restricted to those enabled by LT resources. When consistency or coverage was too low, new concepts suggested by our intervention theory of change and qualitative research were added. To avoid data-dredging, we stopped adding conditions when no further measures that aligned with the hypotheses emerged directly from the qualitative findings. For example, in the first iteration of the overarching mechanism, consistency was high at 90% and coverage was moderate at 55%. Therefore, we added indicators for the learning of conflict resolution skills and the decreasing of conduct problems, which are both important for improving pro-social skills (Goodman, 1997).
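The consistency and coverage measures reported above follow Ragin’s standard fuzzy-set definitions; a minimal sketch of those formulas is below (the membership scores are hypothetical, and Tosmana’s exact computation may differ in detail).

```python
def consistency(x, y):
    """Consistency of condition X as sufficient for outcome Y:
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage of outcome Y by condition X:
    sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Hypothetical fuzzy membership scores for four schools
x = [0.8, 0.6, 0.9, 0.3]  # membership in the condition
y = [0.9, 0.7, 0.8, 0.6]  # membership in the outcome
cons = consistency(x, y)  # ≈ 0.96, above the 0.75 threshold
cov = coverage(x, y)      # ≈ 0.83, so other pathways likely matter too
```

High consistency with modest coverage, as in this toy case, mirrors the authors’ situation: the condition reliably accompanies the outcome, but does not account for all of it.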
Oxyhemoglobin changes in the prefrontal cortex in response to cognitive tasks: a systematic review
Published in International Journal of Neuroscience, 2019
Leandro Viçosa Bonetti, Syed A. Hassan, Sin-Tung Lau, Luana T. Melo, Takako Tanaka, Kara K. Patterson, W. Darlene Reid
Quality assessment scores ranged between 50% and 75% (8 to 12 out of 16 points), with a mean of 65% (Table 2). Studies could receive a score of 0 or 1 on 14 of the 15 questions, and 0, 1, or 2 on question 5. Three articles were rated as high quality, four as moderate quality, and three as low quality [31]. All studies received points for reporting the hypothesis; clearly describing the study methodology; describing the characteristics of subjects; clearly stating results; not deriving results from data dredging; using appropriate statistics; and using reliable and accurate outcome measures. On the other hand, none of the articles scored positively with respect to having study samples representative of the population of interest and blinding the subjects to the intervention or the assessors measuring the outcomes.
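The scoring scheme described above (14 binary items plus one item scored 0–2, for a 16-point maximum) can be sketched as follows; the function name and the example item scores are hypothetical, not taken from the review.

```python
def quality_percent(binary_items, q5_score):
    """Convert 15 item scores (14 items scored 0/1, question 5 scored
    0-2) into a percentage of the 16-point maximum."""
    assert len(binary_items) == 14 and all(s in (0, 1) for s in binary_items)
    assert q5_score in (0, 1, 2)
    total = sum(binary_items) + q5_score
    return 100 * total / 16

# A study scoring 12/16 rates 75%, the top of the reported range:
pct = quality_percent([1] * 10 + [0] * 4, 2)  # (10 + 2) / 16 -> 75.0
```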