Analysis of a Machine Learning Algorithm to Predict Wine Quality
Published in Roshani Raut, Salah-ddine Krit, Prasenjit Chatterjee, Machine Vision for Industry 4.0, 2022
The performance of a classification model on a given set of test data is summarised using a confusion matrix, which can be constructed only when the true labels of the test data are known. In information retrieval and machine-learning classification, precision, also called positive predictive value, is the fraction of retrieved instances that are relevant, while recall, also known as sensitivity, is the fraction of relevant instances that were retrieved; both measures are therefore based on relevance. In statistical hypothesis testing, a Type I error is the rejection of a true null hypothesis, also known as a "false-positive" (FP) finding or conclusion (for example, an innocent person is convicted), while a Type II error is the non-rejection of a false null hypothesis, also known as a "false-negative" (FN) finding or conclusion (for example, a guilty person is not convicted). The different terms used are described next:
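These quantities can be sketched in a few lines of Python. The labels and predictions below are purely illustrative (1 = positive class, 0 = negative class), not data from the wine-quality study:

```python
# Illustrative ground-truth labels and model predictions
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives (Type I errors)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives (Type II errors)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

precision = tp / (tp + fp)  # fraction of retrieved (predicted-positive) instances that are relevant
recall = tp / (tp + fn)     # fraction of relevant (actually-positive) instances that were retrieved

print(tp, fp, fn, tn)     # 4 1 1 4
print(precision, recall)  # 0.8 0.8
```

The four counts are exactly the cells of the 2×2 confusion matrix; precision and recall follow directly from them.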
Introduction
Published in Graham V. Weinberg, Radar Detection Theory of Sliding Window Processes, 2017
The probability of false alarm ($P_{fa}$) is the probability that the hypothesis $H_0$ is rejected when it is actually true. In statistical hypothesis testing, this is known as a Type I error, or the size of the statistical test. In terms of radar detection, too many false alarms can result in a tracking algorithm missing a true target. Hence this is a critical problem, and the $P_{fa}$ needs to be minimised. In practice, one usually sets it to an acceptable level, such as $10^{-4}$ or smaller. In mathematical terms the $P_{fa}$ is given by
$$ P_{fa} = P\left(Z_{0} > \tau\, g(Z_{1}, Z_{2}, \ldots, Z_{N}) \mid H_{0}\right), $$
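This definition lends itself to a Monte Carlo sketch. The snippet below assumes unit-mean exponentially distributed clutter and takes g as the cell average, i.e. the classical cell-averaging detector, for which the closed form Pfa = (1 + τ/N)^(−N) holds; the parameter values and target Pfa (kept large so the simulation can resolve it) are illustrative only:

```python
import random

random.seed(1)

def estimate_pfa(tau, N, trials=200_000):
    """Monte Carlo estimate of Pfa = P(Z0 > tau * g(Z1,...,ZN) | H0),
    assuming unit-mean exponential clutter and g = cell average."""
    alarms = 0
    for _ in range(trials):
        z0 = random.expovariate(1.0)  # cell under test: clutter only (H0 true)
        g = sum(random.expovariate(1.0) for _ in range(N)) / N  # clutter-level estimate
        if z0 > tau * g:
            alarms += 1  # threshold exceeded under H0 => a false alarm
    return alarms / trials

N = 16
target_pfa = 1e-2                       # illustrative; radar practice wants 1e-4 or smaller
tau = N * (target_pfa ** (-1 / N) - 1)  # closed-form CA threshold: Pfa = (1 + tau/N)**(-N)
print(round(estimate_pfa(tau, N), 3))   # close to 0.01
```

Inverting the closed form for τ and confirming the simulated rate matches the target is a useful sanity check before pushing the threshold out to operationally small Pfa values.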
Validation of Ground Motion Simulations for Historical Events using Skewed Bridges
Published in Journal of Earthquake Engineering, 2020
Carmine Galasso, Peyman Kaviani, Alexandra Tsioulou, Farzin Zareian
Statistical hypothesis testing is a method of statistical inference used for testing scientific models and assumptions. In particular, parametric hypothesis tests are proposed here to quantitatively assess the statistical significance of differences in terms of the proposed EDPs (for a given bridge) due to recorded and simulated ground motions.
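As an illustration of such a parametric test, the following plain-Python two-sample (pooled) t-test compares a hypothetical EDP sample, say peak drift ratio for one bridge, under recorded versus simulated motions. The numbers are invented for the example and are not from the study:

```python
import math
import statistics

# Hypothetical EDP samples (e.g. peak drift ratio, %) for one bridge
recorded = [1.2, 1.5, 1.1, 1.4, 1.3]
simulated = [1.3, 1.6, 1.2, 1.5, 1.4]

n1, n2 = len(recorded), len(simulated)
mean1, mean2 = statistics.mean(recorded), statistics.mean(simulated)

# Pooled sample variance (assumes equal population variances)
sp2 = ((n1 - 1) * statistics.variance(recorded)
       + (n2 - 1) * statistics.variance(simulated)) / (n1 + n2 - 2)

# Test statistic for H0: the two population means are equal
t_stat = (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

t_crit = 2.306  # two-sided critical value t_{0.975} for df = 8
reject = abs(t_stat) > t_crit
print(round(t_stat, 3), reject)  # t = -1.0: no significant difference at alpha = 0.05
```

Failing to reject H0 here would be read as the simulated motions producing EDPs statistically indistinguishable from the recorded ones, which is the desired validation outcome.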
Development of a nonlinear rutting model for asphalt concrete based on Weibull parameters
Published in International Journal of Pavement Engineering, 2019
A. S. M. Asifur Rahman, Matias M. Mendez Larrain, Rafiqul A. Tarefder
The correlation coefficient r for a sample of a population ranges between −1 and +1. An r-value close to +1 or −1 indicates that the independent variable (material attribute) is strongly related to the dependent variable, positively or negatively. In the current study, the dependent variables are the Weibull β and η. The p-value, on the other hand, is defined as the probability of obtaining a result equal to or 'more extreme' than the one actually observed, assuming the null hypothesis is true. In frequentist inference, the p-value is widely used in statistical hypothesis testing, specifically in null hypothesis significance testing. A small p-value thus indicates that the observed data would be unlikely if the null hypothesis were true; it is not the probability that the null hypothesis itself is true. If the p-value is very small, the null hypothesis, which here asserts no relation between the variables, is rejected and the alternative hypothesis is accepted, implying that the variables are significantly related. If, on the other hand, the p-value is large, we fail to reject the null hypothesis, implying that no relationship between the tested variables is detected. A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so we reject it; a large p-value (> 0.05) indicates weak evidence against the null hypothesis, so we fail to reject it. The significance level, also denoted alpha or α, is the probability of rejecting the null hypothesis when it is true. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference. The choice of significance level depends on the analyst and the problem at hand; typical α-values are 0.01, 0.05, and 0.10.
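These definitions can be illustrated with a short pure-Python sketch: Pearson's r for a hypothetical material attribute (say, air voids) against Weibull β, with significance judged by comparing the test statistic t = r·√((n−2)/(1−r²)) against a tabulated critical value. The data are invented for illustration, not from the study:

```python
import math

# Hypothetical data: air voids (%) vs. fitted Weibull beta for six AC samples
x = [4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
y = [2.1, 2.0, 1.8, 1.7, 1.5, 1.4]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

r = sxy / math.sqrt(sxx * syy)                 # Pearson correlation coefficient
t_stat = r * math.sqrt((n - 2) / (1 - r * r))  # test statistic for H0: no correlation

t_crit = 2.776  # two-sided critical value t_{0.975} for df = n - 2 = 4 (alpha = 0.05)
significant = abs(t_stat) > t_crit
print(round(r, 3), significant)  # strong negative correlation, significant at alpha = 0.05
```

Here r is close to −1 and |t| far exceeds the critical value, so the null hypothesis of no relation would be rejected; had r been near zero, we would fail to reject it.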
However, the analyst or modeller can always increase this value to incorporate variables that the tests find less significant but that are known to be significant from other sources or studies. It should be noted that, for both the linear-dependency and significance tests, the independent variables other than the one being tested are assumed to be held equal. In the present case, however, the 'all else equal' condition cannot be maintained because the material attributes vary simultaneously across all the AC samples. Therefore, spurious results are possible when evaluating linear dependencies between the independent and dependent variables.