Data Analytics for COVID-19
Published in Chhabi Rani Panigrahi, Bibudhendu Pati, Mamata Rath, Rajkumar Buyya, Computational Modeling and Data Analysis in COVID-19 Research, 2021
Survival analysis is a branch of statistics used to estimate the expected time until an event occurs. These events include, but are not limited to, the death of a patient, customer churn, and machine failure. In COVID-19 research, and in medical research generally, a widely used method is the Kaplan–Meier curve, which estimates the probability that a patient survives for a given length of time after treatment. Figure 12.6 illustrates the Kaplan–Meier estimator of the survival function S(t), which returns the probability that survival time exceeds t. In this equation, ti is a time at which at least one event happened, di is the number of events (e.g., deaths) that occurred at time ti, and ni is the number of individuals known to have survived up to time ti. This chapter presents the parameterized survival probabilities of COVID-19 patients with respect to their ages. The parameters taken here are clinical parameters that specify pre-existing patient health conditions.
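The estimator described above can be sketched in a few lines of plain Python; the patient times and event indicators below are made up for illustration, and ties between a death and a censoring at the same time are handled in the conventional way (the censored subject is still counted as at risk).

```python
# Minimal Kaplan-Meier estimator: S(t) = prod over t_i <= t of (1 - d_i/n_i),
# where d_i deaths occur at time t_i and n_i subjects are at risk at t_i.

def kaplan_meier(times, events):
    """times: observed times; events: 1 = death observed, 0 = censored.
    Returns a list of (t_i, S(t_i)) at each distinct event time."""
    data = sorted(zip(times, events))
    n = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # deaths d_i at t
        at_risk = sum(1 for tt, _ in data if tt >= t)       # n_i at risk at t
        if d > 0:
            s *= 1 - d / at_risk
            curve.append((t, s))
        while i < n and data[i][0] == t:   # skip all records tied at this time
            i += 1
    return curve

# toy data: six patients, events[i] == 0 means the observation was censored
times = [2, 3, 3, 5, 8, 8]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
print(curve)
```

Each step multiplies the running survival probability by (1 − di/ni), so the curve only drops at observed event times, while censored observations reduce the future risk sets without producing a step.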
Analyzing Toxicity Data Using Statistical Models for Time-To-Death: An Introduction
Published in Michael C. Newman, Alan W. McIntosh, Metal Ecotoxicology, 2020
Philip M. Dixon, Michael C. Newman
These techniques can be divided into two groups. The first comprises the techniques mandated by regulation for use in routine toxicity testing. The standard bioassay procedure for a short-term, dose-response experiment is to expose animals for 96 h, count the number of deaths, and calculate LC50s and their 95% confidence intervals1 (Am. Public Health Assoc., pp. 641-645). The focus in these studies is the routine toxicological evaluation of a new chemical or a material of unknown constituents. Appropriate techniques are fast, easily performed, and not sensitive to violations of statistical assumptions. However, we find a common tendency for toxicological researchers to select these routine toxicity-testing protocols uncritically in their own research efforts. This chapter introduces researchers to statistical techniques that provide considerably more abundant and precise information than the standard techniques, with only a small amount of additional effort. These techniques, called survival analysis, failure-time analysis, or life data analysis, are widely used in medical and engineering research.24
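As a rough sketch of the LC50 calculation mentioned above: one common approach fits a logit-linear dose-response model, logit(p) = a + b·log(dose), and reads off the dose at which predicted mortality is 50%. The doses, group sizes, and death counts below are hypothetical, and the ordinary least-squares fit on observed logits is a simplification of the maximum-likelihood fits used in practice.

```python
# Illustrative LC50 estimate from a hypothetical 96-h dose-response bioassay.
import math

doses = [1.0, 2.0, 4.0, 8.0, 16.0]   # exposure concentrations (hypothetical)
exposed = [20, 20, 20, 20, 20]       # animals per dose group
deaths = [2, 5, 10, 15, 19]          # deaths counted after 96 h

xs, ys = [], []
for dose, n, d in zip(doses, exposed, deaths):
    p = d / n
    xs.append(math.log(dose))
    ys.append(math.log(p / (1 - p)))  # logit of observed mortality

# least-squares fit of logit(p) = a + b * log(dose)
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# LC50 is the dose where logit(p) = 0, i.e. predicted mortality is 50%
lc50 = math.exp(-a / b)
print(round(lc50, 2))
```

A 95% confidence interval for the LC50 would then come from the sampling variability of a and b (or, as in the next excerpts, from bootstrapping), which this sketch omits.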
Integrated analysis system for elevator optimization maintenance using ontology processing and text mining
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
M. Nagasaka, M. Sato, E. Kinoshita
This section describes failure analysis in our system for diagnosing component failures, identifying the equipment attributes that affect failures, and predicting failures. Survival analysis is a well-known statistical modeling method for the time until irregularities (failures) occur, commonly used in medicine, reliability engineering, and other fields. From the survival model of a component, we can calculate its probability of survival at a future time, or after some number of uses, following installation. Constructing a survival model requires "right-censored data," in which the conditions under which components were replaced are specified. We used survival analysis to construct survival models of elevator components from maintenance records, in order to apply those models to the prediction of component failure. The maintenance records comprised equipment master data, troubleshooting data, and regular operation data. However, to determine the conditions under which components were replaced, it was necessary to use the handwritten troubleshooting reports, which described replacements using a variety of expressions.
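The right-censored data construction described above can be sketched as follows. All component names, dates, and fields are hypothetical; in the actual system they would be extracted from the equipment master data and the troubleshooting reports.

```python
# Sketch: turning maintenance records into right-censored survival data.
from datetime import date

# (component, installed, removed_or_last_inspection, replaced?)
records = [
    ("door_roller", date(2015, 3, 1), date(2017, 6, 1), True),   # replaced
    ("door_roller", date(2016, 1, 1), date(2018, 1, 1), False),  # in service
    ("hoist_rope", date(2014, 5, 1), date(2018, 1, 1), False),   # in service
]

dataset = []
for name, start, end, replaced in records:
    duration = (end - start).days  # observed days in service
    event = 1 if replaced else 0   # 0 = right-censored (no failure seen yet)
    dataset.append((name, duration, event))

for row in dataset:
    print(row)
```

Components still in service at the last inspection contribute censored durations: we know only that they survived at least that long, which is exactly the information a survival model can exploit.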
Outlier detection and online monitoring of event sequences arising in customer service processes with unknown event-types
Published in Quality Engineering, 2021
Akash Deep, Shiyu Zhou, Dharmaraj Veeramani, Seamus Wedge, Chris Hardin
Within the survival analysis literature, the Cox PH regression model (Cox 1972) is popular owing to its semi-parametric nature and its ability to include predictors. Various residuals have been proposed for the Cox PH model, such as Cox-Snell residuals (Cox and Snell 1968), martingale residuals, and deviance residuals (Therneau, Grambsch, and Fleming 1990). Each of these residuals was constructed with a specific motivation: Cox-Snell residuals test the fit of the cumulative hazard, martingale residuals compare the observed and expected numbers of events, and so on. In the same spirit, Nardi and Schemper (1999) proposed two residuals, (a) normal deviate residuals and (b) logistic residuals, which test the fitted survival probability against the median survival probability. They show that these two residuals are superior to the other mentioned residuals in terms of outlier screening. However, these residuals were developed for the Cox PH model with a terminal event; that is, they check the residual of only one event time, and hence none of them can be applied directly to the case at hand. Another approach within survival analysis is the joint modeling of recurrent and terminal events (such as Ye, Kalbfleisch, and Schaubel 2007; Zhangsheng and Liu 2011). However, to the best of our knowledge, monitoring of such joint models is not available.
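The two classical residuals named above can be sketched directly from their definitions, assuming the Cox PH model has already been fitted. The coefficient beta, the baseline cumulative hazard H0(t), and the subject records below are all made-up values for illustration, not output from a real fit.

```python
# Sketch of Cox-Snell and martingale residuals for a fitted Cox PH model.
# Cox-Snell residual: r_i = H0(t_i) * exp(beta * x_i), the estimated
# cumulative hazard at the observed time; if the model fits, the r_i behave
# like a censored sample from a unit exponential distribution.
# Martingale residual: M_i = delta_i - r_i (observed minus expected events).
import math

beta = 0.5                       # hypothetical fitted coefficient

def H0(t):
    return 0.1 * t               # hypothetical baseline cumulative hazard

subjects = [                     # (time, event indicator, covariate x)
    (2.0, 1, 1.0),
    (5.0, 0, 0.0),               # censored subject
    (3.5, 1, 2.0),
]

residuals = []
for t, delta, x in subjects:
    cox_snell = H0(t) * math.exp(beta * x)
    martingale = delta - cox_snell
    residuals.append((cox_snell, martingale))

for cs, m in residuals:
    print(round(cs, 4), round(m, 4))
```

A large negative martingale residual flags a subject who survived far longer than the model expected, while a value near 1 flags an unexpectedly early event, which is why these residuals are natural raw material for outlier screening.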
Red-light running behavior of delivery-service E-cyclists based on survival analysis
Published in Traffic Injury Prevention, 2020
In survival analysis, the Cox model assumes that the hazard ratio is time-independent. Therefore, the log-minus-log (LML) curve method, based on the negative log survival function, is used to test whether the data satisfy the proportional hazards hypothesis. When the hazard ratio is time-independent, the Cox model implies log(-log S(t | X)) = log(-log S0(t)) + β′X, so the LML curves of the subgroups should be parallel (Engstrand et al. 2018). The results indicate that the subgroup curves of each potential influencing factor are parallel to each other, as shown in Figure C1. Additionally, the statistical p value is 0.621 (> 0.05). Therefore, both the graphical and the statistical results indicate that all proposed factors satisfy the proportional hazards hypothesis, and the Cox proportional hazards model can be used to explore the effects of the influencing factors.
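The parallelism behind the LML check can be verified numerically: under proportional hazards, S1(t) = S0(t)^c for a constant hazard ratio c, so the vertical gap between the two LML curves is log(c) at every t. The baseline survival function and hazard ratio below are synthetic values chosen only to illustrate this.

```python
# Numeric illustration of the log-minus-log (LML) parallelism check.
# Under proportional hazards S1(t) = S0(t)**c, so
# log(-log S1(t)) - log(-log S0(t)) = log(c), constant in t.
import math

def S0(t):
    return math.exp(-0.2 * t)    # synthetic baseline: exponential survival

c = 1.8                          # synthetic hazard ratio between two groups

def S1(t):
    return S0(t) ** c

gaps = []
for t in [1.0, 2.0, 5.0, 10.0]:
    lml0 = math.log(-math.log(S0(t)))
    lml1 = math.log(-math.log(S1(t)))
    gaps.append(lml1 - lml0)

print(gaps)   # every gap equals log(1.8): the LML curves are parallel
```

If the hazard ratio varied with time, the gaps computed at different t would differ, and the LML curves would cross or diverge, which is exactly what the graphical test in Figure C1 screens for.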
Spare parts demand forecasting: a review on bootstrapping methods
Published in International Journal of Production Research, 2019
M. Hasni, M.S. Aguir, M.Z. Babai, Z. Jemai
The second bootstrap variant that belongs to this class is the bootstrap for censored (incomplete) data. Efron (1981) estimates distributions emanating from populations in which some of the data values are not fully observable because of censoring. For example, in survival analysis a researcher has only partial information: while he knows that the event 'will' occur, he does not observe its temporal circumstances. A feature of such data is that the observations involve two sequences, namely the observed event times (Ti) and the censoring values (Ci), which are assumed to be independent for each i. The underlying initial data sample is then the set of pairs (Yi, δi), where Yi = min(Ti, Ci) and δi = 1 if Ti ≤ Ci (the event is observed) and δi = 0 otherwise. Efron (1981)'s estimate of the empirical bootstrap distribution may be obtained by drawing B bootstrap samples (Y*i, δ*i) randomly and successively with replacement from the pairs (Yi, δi). It may then be used to produce the non-parametric maximum likelihood estimator for Yi. For example, in survival analysis the Yi are lifetimes and their non-parametric maximum likelihood estimator is the Kaplan–Meier estimator.
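The resampling scheme described above can be sketched as follows. The pairs below are made-up data, and for brevity the statistic recomputed on each bootstrap sample is the observed event rate; in Efron's setting one would recompute the Kaplan–Meier estimator (or a functional of it, such as the median lifetime) on each resample.

```python
# Sketch of Efron's (1981) bootstrap for censored data: resample the pairs
# (Y_i, delta_i) with replacement and recompute a statistic on each sample.
import random

random.seed(0)

# Y_i = min(T_i, C_i); delta_i = 1 if the event was observed (T_i <= C_i)
pairs = [(2, 1), (3, 1), (3, 0), (5, 1), (8, 0), (8, 1), (9, 1), (12, 0)]

B = 1000
stats = []
for _ in range(B):
    sample = [random.choice(pairs) for _ in pairs]  # resample with replacement
    event_rate = sum(d for _, d in sample) / len(sample)
    stats.append(event_rate)

stats.sort()
lo, hi = stats[int(0.025 * B)], stats[int(0.975 * B)]
print("95% bootstrap interval for the event rate:", lo, hi)
```

Because each resampled pair keeps its censoring indicator attached to its observed time, every bootstrap sample is itself a valid censored dataset, which is what lets the Kaplan–Meier estimator be recomputed on it directly.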