Lifetime Data and Concepts
Published in Prabhanjan Narayanachar Tattar, H. J. Vaman, Survival Analysis, 2022
Prabhanjan Narayanachar Tattar, H. J. Vaman
The survival probability S(t) is widely known as the survival function, and is the complement of the distribution function F(t), that is, S(t) = 1 - F(t). Note that S(0) = 1 and S(t) decreases towards 0 as t increases.
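As a worked illustration (not part of the excerpt), the exponential distribution makes the complement relationship concrete:

```latex
% Illustrative example: exponentially distributed lifetime with rate \lambda > 0
F(t) = P(T \le t) = 1 - e^{-\lambda t}, \qquad
S(t) = 1 - F(t) = e^{-\lambda t}, \qquad t \ge 0,
```

so that S(0) = 1 and S(t) → 0 as t → ∞, as required of any survival function.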
Survival Analysis
Published in Marcello Pagano, Kimberlee Gauvreau, Heather Mattie, Principles of Biostatistics, 2022
Marcello Pagano, Kimberlee Gauvreau, Heather Mattie
A distribution of survival times can be characterized by a survival function, represented by S(t). S(t) is defined as the probability that an individual survives beyond time t. Equivalently, for a given t, S(t) specifies the proportion of individuals who have not yet failed at that time. If T is a random variable representing survival time, then S(t) = P(T > t).
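A minimal numerical sketch of this definition (not from the excerpt; the rate and sample size below are arbitrary) estimates S(t) as the fraction of simulated times exceeding t and compares it with the theoretical exponential form:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.5                                            # illustrative hazard rate (lambda)
times = rng.exponential(scale=1 / rate, size=10_000)  # simulated survival times T

def empirical_survival(times, t):
    """S(t) = P(T > t), estimated as the fraction of observed times exceeding t."""
    return np.mean(times > t)

for t in (1.0, 2.0, 4.0):
    print(f"t={t}: empirical S(t)={empirical_survival(times, t):.3f}, "
          f"theoretical exp(-lambda*t)={np.exp(-rate * t):.3f}")
```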
Introduction
Published in Catherine Legrand, Advanced Survival Models, 2021
For people not familiar with event-time data, it is interesting to note that, given the intrinsic time component, we may actually distinguish an instantaneous and a cumulative quantification of the event-time distribution [132]. Indeed, the hazard is obviously an instantaneous description of the data; it quantifies, at each time point, the fraction of individuals who develop the event of interest at that time amongst those still at risk of an event at that time. It is therefore sometimes also referred to as a rate and is linked to the concept of incidence in epidemiology. The hazard function is also sometimes called the instantaneous death rate, the intensity function or the force of mortality. On the other hand, the (cumulative) distribution function (the complement of the survival function) is a cumulative measure, based at time t on the fraction of individuals who have experienced the event up to that time amongst all those at risk at the beginning of the observation period. As a consequence, it can never decrease as the time span increases. Other common names for this cumulative distribution function are the cumulative incidence, the cumulative risk or the actuarial risk function.
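The following standard identities (added here for reference; they are not part of the excerpt) make the link between the instantaneous and cumulative descriptions explicit, with f the density, F the distribution function, S the survival function and h the hazard:

```latex
h(t) = \lim_{\Delta t \to 0}
       \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
     = \frac{f(t)}{S(t)},
\qquad
F(t) = 1 - S(t) = 1 - \exp\!\left(-\int_0^t h(u)\,du\right).
```

The exponential of the (negative) integrated hazard is exactly why the cumulative measure can never decrease while the hazard itself may rise and fall freely.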
Regression modelling of interval censored data based on the adaptive ridge procedure
Published in Journal of Applied Statistics, 2022
Olivier Bouaziz, Eva Lauridsen, Grégory Nuel
A direct method for deriving confidence intervals or statistical tests can therefore be based on the normal approximation of the model parameter after computing the Hessian matrix of the observed log-likelihood. However, since the calculation of the Hessian matrix is tedious under the piecewise constant hazard model, we prefer to use a likelihood ratio test approach. This approach and the explicit expression of the Hessian are detailed in the Supplementary Material. See also Ref. [28] for more details about the likelihood ratio test approach for constructing confidence intervals. Finally, note that bootstrap methods can also be implemented to derive confidence intervals. This technique is particularly relevant when the interest lies in the estimation of the survival function in a non-parametric or regression context. In order to derive the asymptotic distribution of such a functional, one would need to use the delta method, which may result in a complicated formula for the variance estimator. The bootstrap alternative avoids these technicalities.
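As an illustration of the bootstrap idea only (a sketch for right-censored data using the lifelines package; the paper itself concerns interval-censored data under a piecewise constant hazard model, and this is not the authors' code), a pointwise percentile confidence band for a Kaplan-Meier survival estimate can be obtained by resampling individuals with replacement. All data below are simulated:

```python
import numpy as np
from lifelines import KaplanMeierFitter  # assumed dependency; not the authors' code

rng = np.random.default_rng(1)
n = 200
T = rng.exponential(2.0, n)          # illustrative event times
C = rng.exponential(3.0, n)          # illustrative censoring times
durations = np.minimum(T, C)
events = (T <= C).astype(int)        # 1 = event observed, 0 = right-censored

grid = np.linspace(0.1, 5.0, 50)
boot = np.empty((500, grid.size))
for b in range(500):
    idx = rng.integers(0, n, n)      # resample individuals with replacement
    km = KaplanMeierFitter().fit(durations[idx], events[idx])
    boot[b] = km.survival_function_at_times(grid).to_numpy()

lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)  # pointwise 95% band
```

This avoids any delta-method variance calculation: the variability of the estimated survival curve is read directly off the bootstrap replicates.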
Efficacy and safety of baloxavir marboxil versus neuraminidase inhibitors in the treatment of influenza virus infection in high-risk and uncomplicated patients – a Bayesian network meta-analysis
Published in Current Medical Research and Opinion, 2021
Vanessa Taieb, Hidetoshi Ikeoka, Piotr Wojciechowski, Katarzyna Jablonska, Samuel Aballea, Mark Hill, Nobuo Hirotsu
The analyses were conducted in a Bayesian framework, using the Markov chain Monte Carlo (MCMC) method, as outlined by the National Institute for Health and Care Excellence Decision Support Unit (NICE DSU) guidelines [16]. The analyses of efficacy outcomes were conducted in the influenza-infected population and the analyses of safety outcomes were conducted in the total population. For continuous outcomes, the mean change from baseline for each treatment and the associated standard errors (SE) were used as inputs. For binary outcomes, the number of patients experiencing the outcome and the total number of patients by study arm were used. For time-to-event outcomes, the analysis was conducted assuming that the survival function for time-to-recovery outcomes followed an exponential distribution; the inputs of the analysis were the logarithm of the hazard rate, log(λ), and the associated SE.
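As a hedged sketch of how such an input is commonly derived (this is not taken from the study; the event count and follow-up figures below are hypothetical), the exponential assumption gives a closed-form hazard estimate λ̂ = d/P from d events over total follow-up time P, with the standard approximation SE(log λ̂) ≈ 1/√d:

```python
import math

d = 87          # hypothetical number of recovery events
P = 412.5       # hypothetical total follow-up time (e.g. patient-days)

lam = d / P                         # exponential MLE of the hazard rate
log_lam = math.log(lam)             # log(lambda), the NMA input
se_log_lam = 1.0 / math.sqrt(d)     # common approximation: SE(log lambda) ~ 1/sqrt(d)

print(f"log(lambda) = {log_lam:.3f}, SE = {se_log_lam:.3f}")
print(f"implied median time to recovery = {math.log(2) / lam:.2f}")
```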
Economic implications of adding a novel algorithm to optimize cardiac resynchronization therapy: rationale and design of economic analysis for the AdaptResponse trial
Published in Journal of Medical Economics, 2020
Gerasimos Filippatos, Xiaoxiao Lu, Stelios I. Tsintzos, Michael R. Gold, Wilfried Mullens, David Birnie, Ahmad S. Hersi, Kengo Kusano, Christophe Leclercq, Dedra H. Fagan, Bruce L. Wilkoff
Multiple parametric survival functions will be fitted to the AdaptResponse mortality data. The parametric functions tested will be exponential, Weibull, Gompertz, log-normal, log-logistic and gamma. The final choice of statistical model will be made using a combination of within-study goodness of fit and long-term clinical plausibility. The survival time within the trial follow-up period will be estimated from Kaplan-Meier curves. It will be assumed that patients surviving throughout the clinical trial have an expected additional lifetime determined by fitting a parametric survival function to the clinical-trial data, accounting for age, gender and NYHA class. A graphical comparison of the finally selected fitted model against the trial data, represented via a Kaplan-Meier plot, will also be made.
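A minimal sketch of this kind of model comparison, assuming the lifelines package (the trial's actual analysis tooling is not stated, and lifelines has no built-in Gompertz or gamma fitter, so only a subset of the listed candidates appears below); the data are simulated for illustration:

```python
import numpy as np
from lifelines import (ExponentialFitter, WeibullFitter,
                       LogNormalFitter, LogLogisticFitter)  # assumed dependency

rng = np.random.default_rng(2)
n = 300
T = rng.weibull(1.5, n) * 4.0        # illustrative survival times
C = rng.uniform(0.5, 6.0, n)         # illustrative administrative censoring
durations, events = np.minimum(T, C), (T <= C).astype(int)

# Fit candidate parametric survival models and rank by within-sample AIC;
# long-term plausibility of the extrapolated tail still needs clinical review.
for cls in (ExponentialFitter, WeibullFitter, LogNormalFitter, LogLogisticFitter):
    fitter = cls().fit(durations, events)
    print(f"{cls.__name__:20s} AIC = {fitter.AIC_:8.2f}")
```

Ranking by AIC covers the within-study goodness-of-fit criterion; the second criterion, long-term clinical plausibility of each model's extrapolation beyond the trial horizon, cannot be automated in this way.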