Meta-Analysis of Diagnostic Tests
Published in Christopher H. Schmid, Theo Stijnen, Ian R. White, Handbook of Meta-Analysis, 2020
Yulun Liu, Xiaoye Ma, Yong Chen, Theo Stijnen, Haitao Chu
and ϕ(·;θj) is the logit-normal distribution indexed by θj (j = 1,2). Note that only one-dimensional integrals are involved in the pseudolikelihood, so the approximation errors can be reduced. In addition, the problems of non-convergence and non-positive-definite covariance matrices are alleviated, since no correlation parameter is involved in the pseudolikelihood. More importantly, in contrast to the bivariate normality assumption made by the standard likelihood method, the pseudolikelihood relies only on the marginal normality of logit sensitivity and logit specificity. Hence, pseudolikelihood-based inference may be more robust than standard likelihood inference to misspecification of the joint distribution. The parameters are estimated by maximizing the pseudolikelihood, robust standard errors are calculated with the sandwich method, and confidence intervals are obtained by Wald's method.
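As an illustration of how such a pseudolikelihood involves only one-dimensional integrals, here is a minimal sketch (not the authors' implementation): each study contributes two marginal binomial likelihoods with logit-normal random effects, each integral evaluated by Gauss–Hermite quadrature, with no correlation parameter. The data and starting values are hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.special import expit

nodes, wts = hermgauss(30)  # Gauss-Hermite rule for the 1-D integrals

def marginal_loglik(mu, tau, y, n):
    """Log marginal likelihood of y successes out of n under a logit-normal
    random effect with mean mu and SD tau: one 1-D integral per study."""
    p = expit(mu + np.sqrt(2.0) * tau * nodes)
    like = np.sum(wts * p**y * (1.0 - p)**(n - y)) / np.sqrt(np.pi)
    return np.log(like)

def neg_pseudo_loglik(params, tp, n_dis, tn, n_nondis):
    """Negative pseudolikelihood: the sum of the two marginal log-likelihoods
    (sensitivity and specificity); no correlation parameter is involved."""
    mu1, log_tau1, mu2, log_tau2 = params
    ll = sum(marginal_loglik(mu1, np.exp(log_tau1), y, n)
             for y, n in zip(tp, n_dis))
    ll += sum(marginal_loglik(mu2, np.exp(log_tau2), y, n)
              for y, n in zip(tn, n_nondis))
    return -ll

# Hypothetical data: true positives / diseased, true negatives / non-diseased
tp, n_dis = [40, 25, 60], [50, 30, 70]
tn, n_nondis = [80, 45, 90], [90, 50, 100]
fit = minimize(neg_pseudo_loglik, np.zeros(4),
               args=(tp, n_dis, tn, n_nondis), method="Nelder-Mead")
# Robust (sandwich) standard errors would then be computed from per-study
# score contributions, and Wald intervals from the estimates and those SEs.
```

The sandwich step is only indicated in a comment here; in practice it is built from the per-study score contributions of the maximized pseudolikelihood.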
Inverse Probability Weighting in Nested Case-Control Studies
Published in Ørnulf Borgan, Norman E. Breslow, Nilanjan Chatterjee, Mitchell H. Gail, Alastair Scott, Christopher J. Wild, Handbook of Statistical Methods for Case-Control Studies, 2018
where the sampled risk set comprises all sampled subjects, both controls and cases, at risk at the given event time. The resulting weighted likelihood is sometimes referred to as a pseudo-likelihood.
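A minimal sketch of such an inverse-probability-weighted partial likelihood, under assumed notation (a single covariate `x`, and `weights` holding the inverse inclusion probabilities of the sampled subjects):

```python
import numpy as np

def ipw_log_partial_likelihood(beta, times, events, x, weights):
    """Inverse-probability-weighted log partial likelihood (a pseudo-likelihood)
    over the sampled subjects of a nested case-control study.
    Assumes one covariate; weights are inverse sampling probabilities."""
    ll = 0.0
    eta = beta * x
    for i in np.where(events == 1)[0]:
        at_risk = times >= times[i]          # sampled subjects still at risk
        denom = np.sum(weights[at_risk] * np.exp(eta[at_risk]))
        ll += weights[i] * (eta[i] - np.log(denom))
    return ll
```

With all weights equal to one this reduces to the ordinary Cox log partial likelihood restricted to the sampled subjects.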
Modeling Binary Outcome Data
Published in Mohamed M. Shoukri, Analysis of Correlated Data with SAS and R, 2018
Once a modeling strategy has been chosen, there is also the question of which method or methods can be used to fit the model. Because of the complexity of specifying a complete joint distribution for a set of correlated responses, and the associated computational burden, maximum likelihood estimation is not always feasible. Instead, pseudo-likelihood methods and various approximations to the desired likelihood have been used. We shall elaborate on this issue in subsequent chapters.
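One common instance of this idea (a minimal sketch, not this book's notation) is a working-independence pseudo-likelihood for clustered binary responses: maximize the product of the marginal Bernoulli likelihoods as if observations were independent, then correct the standard errors with a cluster-robust sandwich estimator. The simulated data below are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def independence_pseudo_loglik(beta, X, y):
    """Working-independence pseudo-log-likelihood for clustered binary data:
    the sum of marginal Bernoulli log-likelihoods, ignoring within-cluster
    correlation."""
    eta = X @ beta
    return np.sum(y * eta - np.log1p(np.exp(eta)))

def sandwich_se(beta_hat, X, y, cluster):
    """Cluster-robust (sandwich) standard errors for the pseudo-likelihood
    estimator: A^{-1} B A^{-1} with cluster-summed score contributions."""
    p = expit(X @ beta_hat)
    A = X.T @ ((p * (1.0 - p))[:, None] * X)   # bread: observed information
    scores = X * (y - p)[:, None]              # per-observation scores
    B = np.zeros_like(A)
    for c in np.unique(cluster):
        s = scores[cluster == c].sum(axis=0)   # sum scores within a cluster
        B += np.outer(s, s)
    Ainv = np.linalg.inv(A)
    return np.sqrt(np.diag(Ainv @ B @ Ainv))

# Hypothetical clustered data: 50 clusters of size 4
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.binomial(1, expit(X @ np.array([0.0, 1.0])))
cluster = np.repeat(np.arange(50), 4)
res = minimize(lambda b: -independence_pseudo_loglik(b, X, y), np.zeros(2))
se = sandwich_se(res.x, X, y, cluster)
```

The point estimates coincide with ordinary logistic regression; only the variance estimate accounts for the clustering.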
Regression analysis of case-cohort studies in the presence of dependent interval censoring
Published in Journal of Applied Statistics, 2021
Mingyue Du, Qingning Zhou, Shishun Zhao, Jianguo Sun
Many authors have discussed the analysis of case-cohort studies, but most existing methods are for right-censored failure time data. For example, some of the early work was given by Prentice [27] and Self and Prentice [32], who proposed pseudolikelihood approaches based on modifications of the commonly used partial likelihood method under the proportional hazards model. Following them, Chen and Lo [3] proposed an estimating equation approach that yields more efficient estimators than the pseudolikelihood estimator of Prentice [27], and Chen [2] developed an estimating equation approach that applies to a class of cohort sampling designs, including the case-cohort design, with the key estimating function constructed by a sample-reuse method via local averaging. Also, Marti and Chavance [25] and Keogh and White [18] proposed multiple imputation methods; in particular, the latter extended the former by considering more complex imputation models that include time, interaction, or nonlinear terms. In addition, Kang and Cai [17] and Kim et al. [19] developed weighted estimating equation approaches for case-cohort studies with multiple disease outcomes, where the latter improved efficiency over the former by using more information in constructing the weights.
Bayesian bandwidth estimation and semi-metric selection for a functional partial linear model with unknown error density
Published in Journal of Applied Statistics, 2021
Given that the errors are unknown in practice, we approximate them by the residuals obtained from the functional principal component and functional NW estimators of the conditional mean. Given bandwidths h and b, the kernel likelihood of the ith residual is obtained from the estimated regression function. This likelihood is not a proper likelihood, since some terms are left out; it is instead a pseudo-likelihood. A pseudo-likelihood is a likelihood function associated with a family of probability distributions that does not necessarily contain the true distribution [35]. As a consequence, the resulting Bayesian estimators, while consistent, may have an inaccurate posterior variance, and credible sets constructed from this posterior may not be asymptotically valid (see also [16,48]).
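A minimal sketch of a kernel-based pseudo-likelihood of this kind, under assumed specifics (a Gaussian kernel and leave-one-out density estimates of the residuals, which need not match the paper's exact form):

```python
import numpy as np

def kernel_pseudo_loglik(resid, b):
    """Leave-one-out Gaussian-kernel pseudo-log-likelihood of residuals:
    each residual's 'likelihood' is a kernel density estimate built from
    the other residuals, so this is a pseudo-likelihood, not a proper one."""
    n = len(resid)
    diff = (resid[:, None] - resid[None, :]) / b
    K = np.exp(-0.5 * diff**2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    np.fill_diagonal(K, 0.0)                           # leave one out
    dens = K.sum(axis=1) / ((n - 1) * b)               # LOO density at resid_i
    return np.sum(np.log(dens))
```

Treating this as the likelihood of the bandwidth(s) allows a (pseudo-)posterior to be formed and sampled, which is the sense in which the resulting posterior variance may be inaccurate.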
Joint model for bivariate zero-inflated recurrent event data with terminal events
Published in Journal of Applied Statistics, 2021
There are diverse approaches to accounting for the dependency between recurrent events and terminal events. Among them, frailty models or shared random-effects models are applied to specify such dependency (Huang and Wolfe [7], Liu et al. [10], and Ye et al. [14]). Ghosh and Lin [5,6] considered marginal models for a testing problem and a regression model in the context of the cumulative mean function. The same topics have been discussed for multivariate recurrent event data. Chen and Cook [2] proposed a testing procedure for multivariate recurrent event data with a terminal event. Zhu et al. [17] extended Cai and Schaubel's approach to the regression modeling of multivariate recurrent event data in order to accommodate both the dependency among different types of recurrent events and the association between recurrent and terminal events. Zhao et al. [15] proposed a pairwise pseudolikelihood approach under an unspecified frailty distribution.