Automatic failure diagnosis for flow control valves
Published in C. Guedes Soares, T.A. Santos, Trends in Maritime Technology and Engineering Volume 1, 2022
E. Ruijs, X. Jiang, R.R. Negenborn, T. Park
The ANOVA (F-test) measures the degree of linear dependency between two random variables and can be used to test whether a feature is a significant predictor of the output. The test calculates an F-value by comparing the variability between and within the groups. A large F-value, arising from a large distance between the group means, indicates that the feature is a good predictor of the output. The mathematical formulation follows Elssied, Ibrahim, & Osman (2014):
$$s_j^2 = \frac{\sum_{i=1}^{N_j} \left( x_{ij} - \bar{x}_j \right)^2}{N_j - 1}, \qquad \bar{g} = \frac{\sum_{j=1}^{J} N_j \bar{x}_j}{N}$$
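As a minimal sketch of the between-group versus within-group comparison described above (the excerpt itself contains no code), the following Python function computes the one-way ANOVA F-value for a single feature; the three small groups are illustrative values only, and NumPy is assumed to be available.

```python
import numpy as np

def anova_f(groups):
    """F-value for one feature whose samples are split into class groups."""
    N = sum(len(g) for g in groups)            # total number of samples
    J = len(groups)                            # number of groups (classes)
    grand_mean = np.mean(np.concatenate(groups))
    # Between-group mean square: spread of the group means around the grand mean
    ms_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups) / (J - 1)
    # Within-group mean square: pooled spread inside each group
    ms_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups) / (N - J)
    return ms_between / ms_within

# Hypothetical feature values for three classes
groups = [np.array([1.0, 1.2, 0.9]), np.array([2.1, 2.0, 2.3]), np.array([3.2, 2.9, 3.1])]
print(anova_f(groups))   # a large F-value marks the feature as a good predictor
```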
Using prototypes for product assessment
Published in Fuewen Frank Liou, Rapid Prototyping and Engineering Applications, 2019
The F-distribution is a right-skewed distribution used most commonly in ANOVA. The F-test or F-ratio is an overall test of the null hypothesis that group means on the dependent variable do not differ. The logic of the F-test is that the larger the ratio of between-groups variance (a measure of effect) to within-groups variance (a measure of noise), the less likely it is that the null hypothesis is true. Once the F-ratio is obtained, it can be converted to a P-value (probability value) or a confidence interval using the degrees of freedom of the numerator and denominator. The P-value of a statistical hypothesis test is the probability of obtaining a value of the test statistic as extreme as, or more extreme than, the one observed by chance alone, if the null hypothesis is true. It is equal to the significance level of the test at which one would only just reject the null hypothesis. The P-value is compared with the chosen significance level of the test, and, if it is smaller, the result is significant. That is, if the null hypothesis were rejected at the 5% significance level, this would be reported as p < 0.05. Tables for converting an F-distribution value to a P-value are listed in Appendix A; the right-tail area is given in the name of the table, for example, α = 0.05. The conversion is also available in Excel, as discussed in Section 9.4.
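The same conversion from an F-ratio to a P-value (or a critical value for a given right-tail area) can be done programmatically instead of with the tables in Appendix A. The sketch below assumes SciPy; the F-ratio and degrees of freedom are illustrative, not taken from the chapter.

```python
from scipy.stats import f

F_ratio = 4.26           # hypothetical F-ratio from an ANOVA
df_num, df_den = 2, 12   # degrees of freedom of numerator and denominator

p_value = f.sf(F_ratio, df_num, df_den)   # right-tail area beyond the F-ratio
print(f"p = {p_value:.4f}")               # significant at the 5% level if p < 0.05

# Critical value for a right-tail area of alpha = 0.05 (the table lookup)
print(f.ppf(0.95, df_num, df_den))
```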
Cares to Deal with Heat Input in Arc Welding: Applications and Modeling
Published in Jaykumar J. Vora, Vishvesh J. Badheka, Advances in Welding Technologies for Process Development, 2019
At this point, it is essential to justify statistically that replications are not needed in the above and further experiments. According to Liskevych et al. (2013), when there is a small number of factors (input variables) and a higher number of factor levels (6 levels, i.e., 5–112 mm), the degrees of freedom of the experiment are high enough. A consistent significance of the tendency can be reached if the hidden variance is low. In an analysis of variance, an F-test is often used to determine whether any group of trials differs significantly from an expected value. If the calculated ratio is less than the table value for a given significance level (for instance, 95%), the null hypothesis that the variances are not significantly different is accepted. This means that not only does the equation fit the data (measured by R²), but the results also do not differ from what was expected (Prob > F must be lower than 0.05).
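As a sketch of the two checks named above, the fit quality (R²) and the regression F-test reported as Prob > F, the snippet below fits a simple linear model with statsmodels; the six factor levels and the responses are made-up values for illustration, not data from the chapter.

```python
import numpy as np
import statsmodels.api as sm

x = np.array([5, 15, 30, 50, 80, 112], dtype=float)   # illustrative factor levels
y = np.array([1.1, 1.9, 3.2, 4.8, 7.1, 9.6])          # illustrative responses

X = sm.add_constant(x)              # intercept + linear term
model = sm.OLS(y, X).fit()

print(f"R^2      = {model.rsquared:.3f}")
print(f"Prob > F = {model.f_pvalue:.4f}")   # regression is significant if < 0.05
```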
Investigation on pH influence for the effective transportation of coal water slurry using experimental design
Published in International Journal of Coal Preparation and Utilization, 2022
Purushottam Karthik J, Raguraman C. M., Tara Sasanka C
Analysis of variance (ANOVA) is a statistical method that was used here to identify the significance of the parameters influencing the pH value and to determine their relative importance. The output values from the analysis of variance of the pH values for both impellers are presented in Tables 9 and 10; the tabulated F-value at the 95% confidence level is F(0.05, 2, 2) = 19.00. The F-test in ANOVA is the ratio of an operational parameter's variance to the error variance, and it determines whether the parameter has a significant effect on the outcome. This was accomplished by comparing each parameter's F-test value with the standard F-table value (F0.05) at the 5% significance level.
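The table value quoted above, F(0.05, 2, 2) = 19.00, and the significance check it is used for can be reproduced with the short sketch below (SciPy assumed); the parameter and error mean squares are placeholders, not values from Tables 9 and 10.

```python
from scipy.stats import f

F_table = f.ppf(0.95, 2, 2)      # critical value F(0.05, 2, 2) ≈ 19.00
print(round(F_table, 2))

ms_parameter = 42.0              # hypothetical mean square of an operational parameter
ms_error = 1.5                   # hypothetical error mean square
F_calc = ms_parameter / ms_error # F-test value for that parameter

print("significant at 95%" if F_calc > F_table else "not significant")
```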
An improved algorithm to predict the mechanical properties of nuclear grade 316 stainless steel under elevated-temperature liquid sodium
Published in Journal of Nuclear Science and Technology, 2021
Yaonan Dai, Xiaotao Zheng, Jiuyang Yu
To compare the goodness of fit of the two models, hypothesis testing is required, including a T-test and an F-test. The T-test is used to compare the means of two sample populations, and the F-test is used to test whether the variances of two populations are equal. The values obtained from the T-test and the F-test are p-values; if the p-values are greater than 0.05, the ANN prediction model has a statistically satisfactory goodness of fit from the modelling point of view [36]. The results of the hypothesis tests are shown in Table 5. Since none of the p-values in Table 5 are less than 0.05, both the BP-NN model and the IRBF-NN model have a statistically satisfactory goodness of fit from the modelling point of view.
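A minimal sketch of the two tests described above is given below, using made-up measured and predicted values rather than the data in Table 5: the T-test on the means comes from SciPy, and the two-sample F-test on the variances is computed directly from the variance ratio, since SciPy provides no dedicated helper for it.

```python
import numpy as np
from scipy import stats

measured  = np.array([512., 498., 505., 520., 515.])   # hypothetical experimental values
predicted = np.array([508., 501., 503., 518., 517.])   # hypothetical model predictions

# T-test: are the two sample means statistically different?
t_res = stats.ttest_ind(measured, predicted)

# F-test: are the two sample variances statistically different?
F = np.var(measured, ddof=1) / np.var(predicted, ddof=1)
df1, df2 = len(measured) - 1, len(predicted) - 1
f_p = 2 * min(stats.f.sf(F, df1, df2), stats.f.cdf(F, df1, df2))   # two-sided p-value

# Both p-values above 0.05 -> statistically satisfactory goodness of fit
print(t_res.pvalue > 0.05 and f_p > 0.05)
```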
Performance Evaluation of Basic Flotation Kinetic Models Using Advanced Statistical Techniques
Published in International Journal of Coal Preparation and Utilization, 2019
Saroj Kumar Sahoo, Nikkam Suresh, Atul Kumar Varma
An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact F-tests mainly arise when the models have been fitted to the data using least squares. A logical sensitivity-test procedure is discussed in the statistical literature [13]. In this sensitivity analysis, the optimal value of each model parameter is changed (±10%, ±25%, ±50%) one at a time; the new calculated recovery value r̂ᵢ is then computed for each data point, and the sum of squares (SSQ) of the differences between the calculated and observed recovery values is determined.
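The following sketch illustrates that sensitivity procedure, using the classical first-order kinetic model r(t) = R∞·(1 − exp(−kt)) purely as an example model; the optimal parameters, flotation times, and observed recoveries are hypothetical, not results from this study.

```python
import numpy as np

def model(t, R_inf, k):
    """Classical first-order flotation kinetic model (illustrative choice)."""
    return R_inf * (1.0 - np.exp(-k * t))

t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # flotation times, min (made up)
r_obs = np.array([35.0, 55.0, 72.0, 83.0, 88.0])   # observed recoveries, % (made up)
R_inf_opt, k_opt = 90.0, 1.2                       # hypothetical optimal parameter values

for name, idx in [("R_inf", 0), ("k", 1)]:
    for pct in (-0.50, -0.25, -0.10, 0.10, 0.25, 0.50):
        params = [R_inf_opt, k_opt]
        params[idx] *= (1.0 + pct)                 # perturb one parameter at a time
        r_calc = model(t_obs, *params)             # new calculated recovery values
        ssq = np.sum((r_calc - r_obs) ** 2)        # SSQ of calculated vs. observed recovery
        print(f"{name} {pct:+.0%}: SSQ = {ssq:8.2f}")
```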