Building the Story of Scientific Evidence for Digital Therapeutics: Trials, Meta-Analysis, and Real-World Data
Published in Oleksandr Sverdlov, Joris van Dam, Digital Therapeutics, 2023
Derek Richards, Angel Enrique, Jorge Palacios, Nora Eilert
On a statistical level, the advantage of meta-analyses lies in the increase in statistical power that results from pooling individual study outcomes, meaning they can provide more reliable estimates of the true treatment effects than individual studies. On a theoretical level, the systematic inclusion of all studies assessing a specific treatment in a particular population allows this treatment's effectiveness to be generalized to future implementations (Cuijpers, 2016). As such, high-quality meta-analytic evidence may guide policy- and service-level decision-making and inform the implementation of DTx interventions within a service or even across a health system (Guyatt et al., 2008). Table 9.6 outlines the key aspects of two systematic reviews and meta-analyses of DTx for different conditions.
Strategies to Handle Missing Data in Meta-Analysis
Published in Ding-Geng (Din) Chen, Karl E. Peace, Applied Meta-Analysis with R and Stata, 2021
Meta-analysis is the process of integrating the results of multiple studies with the goal of estimating the true effect of an intervention on a particular effect size (ES) of interest. Meta-analysis and systematic review are very similar in terms of narrative summary; however, a meta-analyst usually pools summary estimates numerically from individual published studies, which are often insufficiently powered to answer a particular research question definitively. In individual published studies, complete data are rarely available, especially when the study must collect information on a large number of individuals or a large number of variables, or simply because the individual study did not collect the outcome and/or moderator required to pool the summary estimates.
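The numerical pooling described above can be sketched with a minimal fixed-effect, inverse-variance example. The effect sizes and variances below are illustrative values, not data from any study discussed in this excerpt:

```python
import math

def fixed_effect_pool(effects, variances):
    """Pool per-study effect sizes using fixed-effect inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Illustrative standardized mean differences and their sampling variances
effects = [0.30, 0.45, 0.25]
variances = [0.04, 0.09, 0.02]
pooled, se = fixed_effect_pool(effects, variances)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # approximate 95% CI
```

Because each study is weighted by the inverse of its variance, precise (typically larger) studies dominate the pooled estimate, which is how pooling underpowered individual studies yields a better-powered combined one.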
Hierarchical Models and Longitudinal Data
Published in Gary L. Rosner, Purushottam W. Laud, Wesley O. Johnson, Bayesian Thinking in Biostatistics, 2021
Gary L. Rosner, Purushottam W. Laud, Wesley O. Johnson
In the next section, we motivate and, by example, illustrate the concept of hierarchical modeling. Next, we discuss applications of hierarchical structures to regression modeling. In the literature, such models are termed “mixed models,” because of the addition of latent (random) effects to standard regression models. We then discuss the important topic of longitudinal and correlated data modeling. In addition, we use the same hierarchical modeling structure to handle meta-analysis. Meta-analysis infers a common effect across a finite number of studies that were designed separately but with the same purpose in mind. For example, one may posit a general model in which each study assessed the effectiveness of a particular drug for treating a particular disease. The essence of a meta-analysis is to combine information from similar studies to make an overall inference about the effectiveness of the treatment.
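The hierarchical view of meta-analysis can be made concrete with a small empirical-Bayes sketch under a normal-normal hierarchy: study-level effects are drawn around a common mean, and each study's posterior mean is shrunk toward that mean. The values of `mu` and `tau2` below are assumed known for illustration; in practice they would be estimated from the studies themselves:

```python
def shrinkage_estimates(y, se, mu, tau2):
    """Posterior means of study-level effects theta_i under the hierarchy
    theta_i ~ N(mu, tau2) and y_i | theta_i ~ N(theta_i, se_i^2)."""
    posterior = []
    for yi, si in zip(y, se):
        w = tau2 / (tau2 + si ** 2)  # weight on the study's own estimate
        posterior.append(w * yi + (1.0 - w) * mu)
    return posterior

# Three illustrative study estimates shrunk toward an overall mean of 0.5
y = [0.8, 0.2, 0.5]
se = [0.3, 0.3, 0.1]
theta = shrinkage_estimates(y, se, mu=0.5, tau2=0.04)
```

Noisier studies (larger `se_i`) are shrunk more strongly toward the common mean, while precise studies keep estimates close to their own data; setting `tau2 = 0` collapses every study to the common effect, recovering the fixed-effect view.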
Identification and validation of core genes in tumor-educated platelets for human gastrointestinal tumor diagnosis using network-based transcriptomic analysis
Published in Platelets, 2023
Yuhong Jiang, Jun He, Xiaobo Wang, Chao Liu, Weihan Zhou, Dekun Liu, Zhushu Guo, Kuijie Liu
Compared with directly intersecting RNA-seq data from different data repositories to obtain DEGs, combining multiple datasets increases the sample size, thereby increasing the reliability and generalizability of the results. “Meta-analysis” is the process of combining results from independent but related studies using statistical techniques.31 Using transcriptome data from multiple studies, a meta-analysis can reveal robust molecular signatures, improve reproducibility, or discover more reliable biomarkers. NetworkAnalyst (http://www.networkanalyst.ca) is a comprehensive online tool that allows users to perform common and complex gene expression meta-analyses, including data integration, statistical meta-analysis, PPI network analysis, and enrichment analysis, via a standard web browser.32 It has been widely used in microarray and RNA-seq meta-analyses.33–36 Notably, as mature human platelets are anuclear, what we refer to as gene “expression” or differential “expression” here is actually the “splicing” of pre-mRNAs in platelets.20,37
A generalized BLUE approach for combining location and scale information in a meta-analysis
Published in Journal of Applied Statistics, 2022
Xin Yang, Alan D. Hutson, Dongliang Wang
All of the aforementioned established methods estimate the mean and the standard deviation for a single study only. The ultimate goal of a meta-analysis is to combine information from multiple studies to arrive at a global estimate of the treatment effect. Therefore, a random-effects model is often assumed in order to estimate the global mean difference or global standardized mean difference as a weighted average of individual means and standard deviations [13,15]. Studies have compared the performance of different weighting approaches; see, for example, Marín-Martínez and Sanchez-Meca [18], who showed that weighting by the inverse variance, as proposed by Hedges and Vevea [13], yields more accurate results in a meta-analysis when the effect size is the standardized mean difference. However, these studies do not consider scenarios in which individual sample means and standard deviations are not reported and must instead be transformed from inconsistently reported quantities. Inaccurate estimates of individual standard deviations lead to inappropriate weights in the inverse-variance approach, and may therefore yield biased overall effect sizes and biased confidence intervals that lead to misleading conclusions.
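One standard way to form such a weighted average under a random-effects model is the DerSimonian-Laird method-of-moments estimate of the between-study variance. The sketch below uses illustrative inputs, not data from the studies cited above:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sw
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se, tau2
```

When the studies are homogeneous, `tau2` collapses to zero and the estimator reduces to the fixed-effect inverse-variance pool; as heterogeneity grows, `tau2` inflates every study's variance, which equalizes the weights across studies. This also shows the sensitivity noted above: if the individual variances fed in are themselves inaccurate reconstructions, both the weights and `tau2` are distorted.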
The application of advanced imaging techniques in glaucoma
Published in Expert Review of Ophthalmology, 2022
Su Ling Young, Nikhil Jain, Andrew J Tatham
By detecting diffractive particle movement, OCT angiography (OCTA) tracks the movement of red blood cells in the retinal microvasculature. A reduction in optic disc perfusion measured by OCTA in glaucoma was first identified by Liu and colleagues in 2015 [55]. A systematic review and meta-analysis by Miguel and colleagues was the first to evaluate studies examining the diagnostic ability of OCTA parameters for glaucoma detection [56]. Based on a pooled analysis of 18 studies, including 888 eyes with glaucoma and 475 controls, mean peripapillary vessel density (PVD) was significantly lower in eyes with glaucoma (57.53%, 95% CI 52.60 to 62.46%) than in controls (65.47%, 95% CI 59.82 to 71.11%, P < 0.001). Eyes with glaucoma also had significantly lower whole optic nerve image vessel density, inside-disc vessel density, and parafoveal vessel density. A strength of the review was that subgroup analysis was performed to account for heterogeneity in the meta-analysis. However, all studies used a double-gate case-control design, which is susceptible to selection bias and may inflate the apparent performance of the metrics under investigation. A recommendation from this review was that future OCTA studies should report the signal strength index, to reduce the potential influence of image quality on vessel density measurements.