Force-System Resultants and Equilibrium
Published in Richard C. Dorf, The Engineering Handbook, 2018
The most common charts used for variables are the X‾ and R charts. The charts are used as a pair for a given quality characteristic. To construct control charts for variables, the following steps may be followed:
1. Define the quality characteristic of interest. Control charts for variables deal with only one quality characteristic; therefore, if multiple properties of the product or the process are to be monitored, multiple charts should be constructed.
2. Determine the sample (also called the subgroup) size. When using control charts, individual measurements or observations are not plotted; rather, sample averages are used. One major reason is the nature of the statistics and their underlying assumptions. Normal statistics, as the term implies, assume a normal distribution of the observations. Although many phenomena may be normally distributed, this is not true of all distributions. A major statistical result, the central limit theorem, states that the distribution of sample averages tends toward normality as the sample size increases, regardless of the shape of the parent population. Plotting sample averages therefore yields an approximately normal distribution, so the normality assumption underlying the applied statistics is met. The sample size (two or larger) is a function of cost and other considerations, such as ease of measurement, whether the test is destructive, and the required sensitivity of the control charts. As the sample size increases, the standard deviation of the sample average decreases; therefore, the control limits become tighter and more sensitive to process variation.
3. For each sample, calculate the sample average, X‾, and the sample range, R.
4. For each sample, record any unusual settings (e.g., new operator, problem with raw material) that may cause an out-of-control condition.
5. After about 20 to 30 subgroups have been collected, calculate
X‾‾ = ΣX‾ / g;  R‾ = ΣR / g,
where g is the number of subgroups.
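The steps above can be sketched in code. The following is a minimal illustration, not part of the original handbook text: the subgroup data are simulated, and the control-chart constants A2, D3, D4 are the standard tabulated values for a subgroup size of n = 5.

```python
import numpy as np

# Hypothetical data: 25 subgroups of size n = 5 (simulated for illustration)
rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=0.2, size=(25, 5))

xbars = samples.mean(axis=1)                         # subgroup averages, X-bar
ranges = samples.max(axis=1) - samples.min(axis=1)   # subgroup ranges, R

# Step 5: grand average and average range over g = 25 subgroups
xbarbar = xbars.mean()   # X-double-bar = (sum of X-bar) / g
rbar = ranges.mean()     # R-bar = (sum of R) / g

# Standard SPC table constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

# Control limits for the X-bar chart
ucl_x = xbarbar + A2 * rbar
lcl_x = xbarbar - A2 * rbar

# Control limits for the R chart
ucl_r = D4 * rbar
lcl_r = D3 * rbar
```

Subgroup averages falling outside [lcl_x, ucl_x], or ranges outside [lcl_r, ucl_r], would then be investigated against the notes recorded in step 4.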
An improved control-limit-based principal component analysis method for condition monitoring of marine turbine generators
Published in Journal of Marine Engineering & Technology, 2020
Kun Yang, Biao Hu, Reza Malekian, Zhixiong Li
However, research studies on this issue are very limited in the literature. Multi-parameter statistical theory in modern statistics provides a powerful tool for obtaining the trend of a system by studying the change of multiple variables (Kuhn 2007; Zhang et al. 2010). Xu et al. (2016) described an integrated online condition monitoring method for the lubricating oil of a steam turbine, so that operators can better understand the status of the lubricating oil and avoid faults caused by polluted or degraded lubricating oil in steam turbines. Peng et al. (2005) used wear debris analysis and vibration analysis to identify wear in a worm gearbox under various controlled experimental conditions. Loutas et al. (2011) fused vibration, acoustic emission and oil-debris measurements for the condition monitoring of rotating machinery.
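A common concrete form of such multi-parameter monitoring, consistent with the control-limit-based PCA approach named in the article title, is to project multichannel measurements onto a few principal components and track Hotelling's T² against a control limit. The sketch below is illustrative only (the sensor data are simulated, and the 99% empirical quantile used as the limit is one possible choice, not necessarily the authors' method):

```python
import numpy as np

def fit_pca(X, k):
    # Center the data and obtain principal directions via SVD
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    var = s**2 / (len(X) - 1)        # variance captured by each component
    return mu, Vt[:k], var[:k]

def t2_statistic(x, mu, comps, var):
    # Hotelling's T^2 of one observation in the retained subspace
    scores = comps @ (x - mu)
    return float(np.sum(scores**2 / var))

# Hypothetical healthy-condition baseline: 200 samples, 6 sensor channels
rng = np.random.default_rng(1)
baseline = rng.normal(size=(200, 6))
mu, comps, var = fit_pca(baseline, k=3)

# Set a control limit from the baseline T^2 distribution (99% quantile here)
t2_baseline = np.array([t2_statistic(x, mu, comps, var) for x in baseline])
limit = np.quantile(t2_baseline, 0.99)
# New observations whose T^2 exceeds `limit` would flag a possible fault
```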
Statistics = Analytics?
Published in Quality Engineering, 2020
Box (1990, p. 252), noting the shift away from the math-heavy focus of statistics in the past, stated “It seems a pity that while we statisticians have an opportunity to rate as first-class scientists we should settle for the rather dreary role of second-class mathematician”. One of our former university professors was fond of quoting this more concisely as “Why do we aspire to be 2nd rate mathematicians when we can be 1st rate scientists?” With the increase in computing power and data availability that continues today, mathematics is becoming less important for statisticians while savviness with a computer is becoming more important. Lenth (2014, p. 14), in a Youden address, described this as the underlying tectonic plates shifting when he stated “Partly due to technology, the underpinnings of the theory of statistics have also evolved in the past few decades. Just as the Pacific tectonic plate has gradually shifted over the hot spot that created the Hawaiian Islands, statistical theory has shifted so that the underlying hot spot has changed largely from mathematics to computer science.” This shift in the underlying tools for the use of statistical methods does not change the fundamental focus of statistics on data and the science of extracting information from that data.
Pipeline signal feature extraction with improved VMD and multi-feature fusion
Published in Systems Science & Control Engineering, 2020
Yina Zhou, Yong Zhang, Dandi Yang, Jingyi Lu, Hongli Dong, Gongfa Li
SVM, a machine learning algorithm based on the structural risk minimization criterion (Burges, 1998), is a data analysis method grounded in statistical theory. The main idea is to find a separating hyperplane that maximizes the margin, i.e. the combined distance from the nearest positive-class points and the nearest negative-class points to the hyperplane. SVM has a clear advantage in solving small- and medium-sized sample problems, nonlinear problems, and high-dimensional data problems (Wang et al., 2019). In this paper, the selected characteristic parameters are used as the basis of leakage judgment, and SVM is used to classify and identify the pipeline signals collected under different working conditions.
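To make the margin-maximization idea concrete, here is a minimal linear SVM trained by subgradient descent on the hinge loss with an L2 penalty. This is a self-contained sketch on simulated two-class features (standing in for "leak" vs "normal" feature vectors), not the paper's actual classifier or feature set:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    # Minimize hinge loss + L2 penalty; labels y must be in {-1, +1}.
    # Shrinking the L2 term is what maximizes the margin.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                     # margin-violating points
        if mask.any():
            grad_w = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            grad_b = -y[mask].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical well-separated two-class data (illustration only)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(+2.0, 1.0, size=(50, 2)),
               rng.normal(-2.0, 1.0, size=(50, 2))])
y = np.array([1] * 50 + [-1] * 50)

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
```

In practice one would typically use an established implementation (e.g. a library SVM with a nonlinear kernel) rather than this toy solver, especially for the nonlinear and high-dimensional cases mentioned above.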