Dealing with Attributes
Published in David J. Smith, Sam Samuel, Basic Statistical Techniques for Medical and Other Professionals, 2021
where e is a number (often used in mathematics) whose value is approximately 2.7. It is not necessary to remember the value of e, or the terms of the Poisson expansion, because, as is the case with the Normal, F and t Distributions, tables are available. In the case of the Poisson Distribution these tables are often expressed as a set of curves giving the probability of n or fewer items having the attribute in question, plotted against m for different values of n. A little caution is necessary, since some tables and curves give the probabilities of n or more occurrences, others those of n or fewer, and others the probability of exactly n. Appendix 6 gives curves of the probabilities of n or fewer occurrences over a range of values of m from 0.1 to 20 and for values of n from 0 to 20.
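For readers who prefer to compute these probabilities directly rather than read them off tables or curves, a minimal Python sketch is given below, assuming m is the Poisson mean and that, as in Appendix 6, the quantity of interest is the probability of n or fewer occurrences. The three functions correspond to the three conventions the text warns about, which also offers a quick way of checking which one a particular table uses.

```python
# Cumulative Poisson probabilities, the quantity plotted in Appendix 6.
# Assumes m is the Poisson mean (expected number of items with the attribute).
from scipy.stats import poisson

def prob_n_or_fewer(n, m):
    """Probability of n or fewer occurrences when the mean number is m."""
    return poisson.cdf(n, m)

def prob_n_or_more(n, m):
    """Probability of n or more occurrences (complement of n-1 or fewer)."""
    return poisson.sf(n - 1, m)

def prob_exactly_n(n, m):
    """Probability of exactly n occurrences."""
    return poisson.pmf(n, m)

# Example: with a mean of m = 4, the probability of 2 or fewer occurrences.
print(prob_n_or_fewer(2, 4.0))   # about 0.238
```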
Using Meta-Analysis to Plan Further Research
Published in Christopher H. Schmid, Theo Stijnen, Ian R. White, Handbook of Meta-Analysis, 2020
Claire Rothery, Susan Griffin, Hendrik Koffijberg, Karl Claxton
Standard results of meta-analysis provide a relative measure of effect (e.g., odds ratio) for an intervention compared with another, expressed in terms of its central value and associated uncertainty (confidence or credible interval) (Sutton and Abrams, 2001; Whitehead, 2002). When this uncertainty is combined with information about baseline risk and number of patients facing the uncertain treatment choice (e.g., expected incidence in the population whose treatment choice can be informed by the decision), the absolute effect of the uncertainty on health outcomes is obtained. Value of information analysis determines an estimate of the potential health benefits that could be gained if this uncertainty about treatment choice were resolved completely. Although further research cannot entirely eliminate uncertainty, it reduces the associated consequences by rendering them increasingly unlikely. For this reason, the estimate of the expected consequences of uncertainty represents an expected upper bound on the potential value of further research.
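The upper bound described here is usually computed as the expected value of perfect information (EVPI). A minimal Monte Carlo sketch, under assumed inputs: the draws of incremental net benefit (inb) stand in for the meta-analytic posterior combined with baseline risk, and the population size n_pop is illustrative rather than taken from any study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical draws of incremental net (health) benefit of intervention vs. comparator,
# e.g. obtained by combining the meta-analytic posterior for the relative effect with
# baseline risk and valuing the resulting absolute effects.
inb = rng.normal(loc=50.0, scale=200.0, size=10_000)

# Expected value of deciding with current information:
# choose the option with the larger expected net benefit (comparator has INB = 0).
ev_current_info = max(np.mean(inb), 0.0)

# Expected value with perfect information: for each draw, pick the better option.
ev_perfect_info = np.mean(np.maximum(inb, 0.0))

# Per-patient EVPI: the expected consequences of current decision uncertainty.
evpi_per_patient = ev_perfect_info - ev_current_info

# Population EVPI: scaled by the (assumed) number of patients whose treatment
# choice could still be informed by further research.
n_pop = 20_000
print(evpi_per_patient, evpi_per_patient * n_pop)
```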
A Synthetic Overview
Published in Trevor G. Bond, Zi Yan, Moritz Heene, Applying the Rasch Model, 2020
Trevor G. Bond, Zi Yan, Moritz Heene
It is at this point that we turn to distinguishing between the long-accepted True Score Theory (TST) and modern Item Response Theory (IRT). The traditional research practices we’ve discussed throughout the book, that is, those based on the Stevens principles, are deeply rooted in TST (also called Classical Test Theory, CTT). CTT/TST has as its basis the model X = T + E. The value X is the actual observed score (number correct) of the examinee on the test. T, the true score, is a hypothetical (unknown) value of the true ability of that examinee. It can be thought of as the examinee’s hypothetical average score, calculated as if the examinee were to take a theoretically infinite number of repetitions of the same test without any learning effect and under identical conditions. The value E represents the error. The model assumes that the value of T is constant and unknowable and that changes in the observed values of X are due to the error, E. Because errors occur at random and are related neither to T nor to each other, and only X is ever observed, T and E can never be known.
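As an illustration of why T and E are individually unobservable, here is a small simulation sketch, assuming normally distributed true scores and errors (a convenience for the simulation, not part of CTT itself): the analyst only ever sees X = T + E, yet the variance decomposition behind CTT's reliability coefficient can still be recovered in aggregate.

```python
import numpy as np

rng = np.random.default_rng(42)
n_examinees = 5_000

# Hypothetical true scores T and random errors E (never observable in practice).
T = rng.normal(loc=30.0, scale=5.0, size=n_examinees)
E = rng.normal(loc=0.0, scale=3.0, size=n_examinees)

# The analyst only ever sees the observed score X = T + E.
X = T + E

# CTT's variance decomposition: Var(X) = Var(T) + Var(E) when T and E are uncorrelated.
# Reliability is the proportion of observed-score variance that is true-score variance.
reliability = T.var() / X.var()
print(round(reliability, 3))   # close to 25 / (25 + 9) ≈ 0.735
```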
The systemic immune-inflammation index was non-linearly associated with all-cause mortality in individuals with nonalcoholic fatty liver disease
Published in Annals of Medicine, 2023
Enfa Zhao, Yiping Cheng, Chunxiao Yu, Huijie Li, Xiude Fan
After conducting a sensitivity analysis and excluding those who died within the first 2 years, the results remained similar (Supplementary Table 1). In the fully adjusted Cox regression model, with individuals in Q1 as the reference, those in the highest quartile had a higher risk of death (aHR = 1.68; 95% CI: 1.32–2.14; p < 0.001). When treating log2-SII as a continuous variable, the results persisted in the fully adjusted model (aHR = 1.32; 95% CI: 1.18–1.48; p < 0.0001). Furthermore, we handled missing data with multiple imputation. After adjusting for multiple variables, the fundamental result was unchanged (aHR = 1.81; 95% CI: 1.48–2.21; p < 0.0001; Supplemental Figure 1). Moreover, to evaluate the potential influence of unmeasured confounding, E-values (with the lower limits of their 95% confidence intervals) were calculated from the HRs for all-cause death (Supplemental Figure 2). An unmeasured confounder would need to be associated with both log2-SII and all-cause mortality by an HR of at least 2.45-fold to explain away the observed association; weaker confounding could not. Taken together, the E-value and sensitivity analyses confirmed the robustness of the findings.
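For context, E-values are conventionally computed with the VanderWeele–Ding formula; a minimal sketch is given below, treating the reported HR as an approximate risk ratio. The excerpt does not state which variant of the formula the authors used, so the numbers here are illustrative and are not expected to reproduce the reported value of 2.45 exactly.

```python
import math

def e_value(rr):
    """E-value for a risk ratio (or an HR treated as an approximate RR) greater than 1:
    the minimum strength of association an unmeasured confounder would need with both
    exposure and outcome to explain away the observed association (VanderWeele & Ding, 2017)."""
    return rr + math.sqrt(rr * (rr - 1.0))

def e_value_ci(ci_lower):
    """E-value for the lower confidence limit; 1.0 if the interval crosses 1."""
    return 1.0 if ci_lower <= 1.0 else e_value(ci_lower)

# Illustrative use with an HR of 1.68 (95% CI lower limit 1.32), as in the quartile analysis.
print(round(e_value(1.68), 2), round(e_value_ci(1.32), 2))
```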
A novel hybrid approach for feature selection enhancement: COVID-19 case study
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2023
Hela Limam, Oumaima Hasni, Ines Ben Alaya
After the filter steps introduced previously, we present the last step of the approach, which consists of applying BFE. The input is the set of features retained by the filter method. The results of the filter phase are embedded into the initial state of the wrapper, allowing the algorithm to pick up the search where the filters left off. The process is directly parameterized, making the search more flexible and focused. The solution found in the wrapper stage is guaranteed to be at least as good as a solution based only on feature ranking. The algorithm is based on the calculation of a p-value, a statistical measure used to assess the significance of a result and to decide whether a hypothesis should be rejected. Here the null hypothesis, denoted H0, is that feature X is probabilistically independent of the outcome variable (i.e., redundant) given a set of variables S.
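A minimal sketch of p-value-driven backward elimination for the wrapper stage is shown below, assuming a binary outcome modelled with logistic regression via statsmodels; the starting feature set, the 0.05 threshold, and the model family are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_feature_elimination(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Iteratively drop the feature whose coefficient has the largest p-value above alpha,
    i.e. the feature most consistent with H0 (independence of the outcome given the others)."""
    features = list(X.columns)
    while features:
        model = sm.Logit(y, sm.add_constant(X[features])).fit(disp=0)
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] <= alpha:
            break                      # every remaining feature is significant
        features.remove(worst)         # discard the most redundant feature
    return features

# Illustrative use on synthetic data (two informative features, one pure noise).
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(500, 3)), columns=["f1", "f2", "noise"])
y = pd.Series((X["f1"] + 0.5 * X["f2"] + rng.normal(size=500) > 0).astype(int))
print(backward_feature_elimination(X, y))   # typically ['f1', 'f2']
```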
Bias-corrected estimators for proportion of true null hypotheses: application of adaptive FDR-controlling in segmented failure data
Published in Journal of Applied Statistics, 2022
Aniket Biswas, Gaurangadeb Chattopadhyay, Aditya Chatterjee
Instead of taking V degenerate at some fixed λ, the derivation proceeds from equations (3) and (9), with e denoting the expectation of a non-null p-value; the unknown quantities are then estimated from the observed p-values. Under the alternative, e can be estimated by imitating the approach of [3] and the computation of t-tests therein, which yields a strongly consistent estimator for e.
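For orientation, the quantity targeted by such procedures is the proportion of true null hypotheses, for which the standard fixed-λ estimator of Storey (2002) is the usual starting point. A minimal sketch of that baseline estimator follows; it is not the bias-corrected estimator developed in the paper, and the mixture used for illustration is invented.

```python
import numpy as np

def storey_pi0(pvalues, lam: float = 0.5):
    """Storey's (2002) fixed-lambda estimator of the proportion of true nulls:
    null p-values are uniform on (0, 1), so the count above lambda is roughly
    pi0 * m * (1 - lambda)."""
    pvalues = np.asarray(pvalues)
    m = pvalues.size
    return min(1.0, np.sum(pvalues > lam) / ((1.0 - lam) * m))

# Illustrative use: 80% uniform (null) p-values mixed with 20% small (non-null) p-values.
rng = np.random.default_rng(7)
p = np.concatenate([rng.uniform(size=800), rng.beta(0.5, 20.0, size=200)])
print(round(storey_pi0(p), 2))   # close to the true proportion of 0.8
```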