Survival Trees
Published in Prabhanjan Narayanachar Tattar, H. J. Vaman, Survival Analysis, 2022
Prabhanjan Narayanachar Tattar, H. J. Vaman
The segment drawing in Part A of the figure is our idea. Algorithms, however, do not have that luxury, since they lack the 'bird's-eye view'. Without going into the precise algorithm, and to get a peek at the bigger picture of decision trees, we look at the classification tree obtained by the algorithm and inspect Part B of Figure 8.2. Here, the tree asks us to first check if . Moving to the right side, if that condition does not hold, the second check is whether . If this criterion is satisfied, the observation is labeled a red square, else a green circle. The reader can interpret the left part of the decision tree similarly. Part C of the figure shows these rules applied to the scatterplot. Of course, if we continue to partition the data, we obtain higher and higher accuracy. The question is: how does one carry out recursive partitioning?
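The partitioning step being described is easy to mimic on simulated data. The following is a minimal R sketch using the rpart package; the data, class labels, and control settings are illustrative placeholders and not the book's Figure 8.2 example. The printed tree consists of axis-parallel rules of the same kind as Part B, and its predictions partition the (x1, x2) scatterplot as in Part C.

## Minimal sketch: a two-class scatter and the classification tree rpart recovers.
library(rpart)

set.seed(1)
n   <- 200
x1  <- runif(n)
x2  <- runif(n)
cls <- factor(ifelse(x2 > 0.6 | (x1 > 0.7 & x2 > 0.3), "red square", "green circle"))
d   <- data.frame(x1, x2, cls)

fit <- rpart(cls ~ x1 + x2, data = d, method = "class",
             control = rpart.control(minsplit = 10, cp = 0.01))
print(fit)                           # each node is a rule such as "x2 >= 0.6"
plot(fit); text(fit, use.n = TRUE)   # the tree, analogous to Part B
table(predicted = predict(fit, type = "class"), observed = d$cls)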
Statistical Methods for Biomarker and Subgroup Evaluation in Oncology Trials
Published in Susan Halabi, Stefan Michiels, Textbook of Clinical Trials in Oncology, 2019
Ilya Lipkovich, Alex Dmitrienko, Bohdana Ratitch
It is well known that recursive partitioning procedures are notoriously unstable and a small perturbation in a trial’s dataset may lead to a substantial difference in terms of selection of splitting variables and associated cutoffs. The Adaptive SIDEScreen uses variable importance scores computed over a broad set of subgroups in order to stabilize the biomarker selection process and choose biomarkers with strong predictive properties. A further improvement on the Adaptive SIDEScreen method is the Stochastic SIDEScreen introduced in the article by Lipkovich et al. [40].
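The instability point can be seen in a small simulation. The sketch below is a generic illustration of resampling-based screening, not the SIDEScreen or Stochastic SIDEScreen algorithms themselves, and the data and biomarker names are made up: the variable chosen at the root of a single tree flips across bootstrap resamples, whereas importance scores averaged over the resamples give a steadier ranking of the candidate biomarkers.

## Illustration only: single-tree instability vs. averaged variable importance.
library(rpart)

set.seed(2)
n <- 300
d <- data.frame(bm1 = rnorm(n), bm2 = rnorm(n), bm3 = rnorm(n))
d$resp <- factor(rbinom(n, 1, plogis(0.8 * d$bm1 + 0.7 * d$bm2)))   # bm3 is pure noise

B <- 200
first_split <- character(B)
vi <- matrix(0, nrow = B, ncol = 3, dimnames = list(NULL, c("bm1", "bm2", "bm3")))
for (b in seq_len(B)) {
  db  <- d[sample(n, replace = TRUE), ]
  fit <- rpart(resp ~ bm1 + bm2 + bm3, data = db, method = "class")
  first_split[b] <- as.character(fit$frame$var[1])    # splitting variable at the root
  imp <- fit$variable.importance
  if (!is.null(imp)) vi[b, names(imp)] <- imp
}
table(first_split)   # the root splitter varies from resample to resample
colMeans(vi)         # averaging over resamples gives a steadier importance ranking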
Glioblastoma
Published in Dongyou Liu, Tumors and Cancers, 2017
Recursive partitioning analysis categorizes patients into different risk groups based on tumor size and location, age at diagnosis, KPS at presentation, and treatment. The lowest-risk group includes patients <40 years with tumor in the frontal lobe only. The intermediate-risk group comprises patients aged 40–65 years with a KPS greater than 70 who underwent subtotal or total resection. The highest-risk group includes patients >65 years, those between 40 and 65 years with a KPS less than 80, or those who underwent biopsy only. Methylation of the O6-methylguanine-DNA methyltransferase (MGMT) promoter, which is present in approximately half of glioblastoma patients over the age of 70, is associated with improved outcome compared with unmethylated MGMT and is considered a marker of better therapeutic response. Patients with glioblastoma (especially of the proneural subtype) harboring the CpG island methylator phenotype and an IDH1 mutation who receive temozolomide-based chemoradiation have a significant overall survival (OS) advantage.
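To make the decision-rule structure of this grouping explicit, the short R function below transcribes the three groups exactly as quoted above; the argument names and the handling of patients not covered by the quoted rules are assumptions, and the sketch illustrates the rule format rather than serving as a validated clinical tool.

## Literal transcription of the quoted RPA groups (illustrative only).
rpa_group <- function(age, kps, extent, frontal_lobe_only) {
  if (age < 40 && frontal_lobe_only)                             return("lowest risk")
  if (age > 65 ||
      (age >= 40 && age <= 65 && kps < 80) ||
      extent == "biopsy only")                                   return("highest risk")
  if (age >= 40 && age <= 65 && kps > 70 &&
      extent %in% c("subtotal resection", "total resection"))    return("intermediate risk")
  NA_character_    # not covered by the rules as quoted
}

rpa_group(age = 55, kps = 90, extent = "total resection", frontal_lobe_only = FALSE)
# "intermediate risk"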
Using Decision Trees to Identify Salient Predictors of Cannabis-Related Outcomes
Published in Journal of Psychoactive Drugs, 2022
Frank J. Schwebel, Dylan K. Richards, Rory A. Pfund, Verlin W. Joseph, Matthew R. Pearson
Machine learning is a branch of quantitative methodology arising from computer science and artificial intelligence (Michalski, Carbonell, and Mitchell 2013). A subtype of machine learning, decision tree learning, is a promising tool for explaining the predictive value of multiple independent variables related to cannabis-related outcomes. Decision tree learning involves developing parsimonious predictive models and provides decision rules for predicting both categorical and continuous outcomes. The algorithm for decision tree learning finds the split of a predictor variable that best distinguishes between two distinct groups on an outcome. Following each split, the same algorithm using all possible predictor variables (including variables from the previous split) determines the next split. The algorithm is repeated until each terminal node contains a relatively homogeneous subsample. Decision tree models are well suited to handle high-dimensional data and can process a large number of predictor variables simultaneously (Strobl, Malley, and Tutz 2009). Recursive partitioning is a type of decision tree learning model useful for exploratory data analyses.
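The core step described here, finding the split of a predictor that best separates two groups on the outcome, can be written out in a few lines of R. The sketch below is illustrative (the impurity measure, data, and names are assumptions): it scans candidate cutpoints for a single numeric predictor and keeps the one that minimizes the weighted Gini impurity; tree software such as rpart repeats this search over all predictors and then recurses within each resulting subsample.

## Minimal single-split search by weighted Gini impurity (illustrative).
gini <- function(y) {                    # impurity of a set of class labels
  p <- prop.table(table(y))
  1 - sum(p^2)
}

best_split <- function(x, y) {           # x: numeric predictor, y: class labels
  vals <- sort(unique(x))
  cuts <- (head(vals, -1) + tail(vals, -1)) / 2    # midpoints between observed values
  score <- sapply(cuts, function(thr) {
    left  <- y[x <= thr]
    right <- y[x >  thr]
    (length(left) * gini(left) + length(right) * gini(right)) / length(y)
  })
  list(cutpoint = cuts[which.min(score)], impurity = min(score))
}

set.seed(3)
x <- rnorm(100)
y <- ifelse(x + rnorm(100, sd = 0.5) > 0, "case", "control")
best_split(x, y)   # the chosen cutpoint lies close to 0, where the groups separate best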
A tree of life? Multivariate logistic outcome-prediction in disorders of consciousness
Published in Brain Injury, 2020
Inga Steppacher, Peter Fuchs, Michael Kaps, Fridtjof W. Nussbeck, Johanna Kissler
Clearly, our results call for validation in additional samples. Unfortunately, we are not able to run a cross-validation within our sample due to sample size requirements. Nevertheless, we used the rpart package (34) (method = "class") to investigate whether our results remain stable across different analytic strategies, namely recursive partitioning. Recursive partitioning strives to correctly classify members of the population by splitting it into sub-populations. In this process, the variable that best splits the data into two groups is identified first. After that, the next variable is tested independently within each sub-group. The resulting models can therefore be presented as binary trees. Using all predictors simultaneously, we found that, again, the occurrence of an N400 and, for those with an N400, age and the occurrence of a P300 were identified as important nodes producing the best recovery rates (Figure 3).
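For concreteness, a call of the kind described here looks as follows; the data are simulated placeholders rather than the study's dataset, the recovery rule used to generate labels is hypothetical, and the resulting tree need not match the one reported in Figure 3.

## Schematic rpart call with method = "class" on simulated placeholder data.
library(rpart)

set.seed(4)
n <- 200
d <- data.frame(
  N400 = rbinom(n, 1, 0.4),      # ERP component present (1) / absent (0)
  P300 = rbinom(n, 1, 0.5),
  age  = round(runif(n, 18, 80))
)
p_recover  <- plogis(-2 + 2.5 * d$N400 + 1.5 * (d$age < 50) + 1.5 * d$P300)
d$recovery <- factor(ifelse(rbinom(n, 1, p_recover) == 1, "recovered", "not recovered"))

fit <- rpart(recovery ~ N400 + P300 + age, data = d, method = "class")
print(fit)   # the printed nodes show which predictors the algorithm split on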
Quantitative assessment of the activity of antituberculosis drugs and regimens
Published in Expert Review of Anti-infective Therapy, 2019
Maxwell T. Chirehwa, Gustavo E. Velásquez, Tawanda Gumbo, Helen McIlleron
CART is a nonparametric method that applies recursive partitioning to a data space, fitting a predictive model within each partition [19]. The recursive partitioning technique allows exploration of the structure of a data set, including the outcome and predictors, and identification of easy-to-visualize decision rules for predicting a continuous (regression tree), time-to-event (survival tree), or categorical (classification tree) outcome. In the continuous-outcome case, the objective is to predict the mean response within each partition. However, the use of regression trees to identify pharmacokinetic targets is limited, mainly because of the availability and widespread use of other machine learning and PK/PD modeling methods described later. Time-to-event outcomes such as time to culture conversion are important markers of treatment response, but this type of data is often analyzed using classical statistical techniques such as the Cox proportional hazards model. The Cox proportional hazards model, however, cannot describe nonlinear interactions between the predictors of a time-to-event treatment outcome without pre-specification. In spite of its potential advantages, survival tree analysis has, to our knowledge, not been applied to clinical data from tuberculosis patients. More often, tuberculosis treatment outcomes are recorded as categorical variables (including time-to-event data converted to a binary outcome), and classification trees are used to predict the categorical outcome.
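As a brief illustration of the survival-tree idea, the sketch below fits a survival tree to the pbc data shipped with R's survival package (a generic example, not tuberculosis data); given a Surv() response, rpart grows an exponential-model survival tree.

## Survival tree sketch on the survival package's pbc data (illustrative only).
library(rpart)
library(survival)

fit <- rpart(Surv(time, status == 2) ~ age + bili + albumin + edema, data = pbc)
print(fit)                           # splits define subgroups with different event rates
plot(fit); text(fit, use.n = TRUE)   # terminal nodes are the identified patient subgroups
head(predict(fit))                   # predicted relative event rate for each patient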