How Total Quality Management and Lean Six Sigma Drove Need for Supply Chain Integration
Published in Erick C. Jones, Supply Chain Engineering and Logistics Handbook, 2020
The timing of the revenues and costs also plays a very important role: revenues that were projected may not be obtained because funds were not available in the right time frame. The precision and detail of the project-planning phase play a very important role in the success of the project. The costs associated with each project are estimated from historical data, quotes, standard rates, or similar activities performed previously. Project revenues and costs are described using four types of measurements:

- Budget: the plan of the total costs and cash inflows, expressed in dollar amounts, for the project. The plan also includes the timing of the revenues and costs and a cost-benefit analysis.
- Forecast: the predicted total revenues and costs, adjusted to include the actual information at the point of project completion.
- Actual: the revenues and costs that have actually occurred for the project.
- Variance: the difference between the budgeted and actual revenues and costs. A positive variance shows the project will be favorable, while a negative variance shows that a loss will be incurred.
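The budget-versus-actual variance is simple arithmetic. A minimal Python sketch follows; the dollar figures are hypothetical, and the sign convention (positive means favorable, so revenue variance is actual minus budget while cost variance is budget minus actual) is an illustrative assumption, since the text above does not fix one:

```python
# Hypothetical project figures in dollars; numbers and the
# positive-equals-favorable sign convention are assumptions.
budget = {"revenue": 120_000, "cost": 90_000}
actual = {"revenue": 110_000, "cost": 95_000}

# Favorable revenue variance: actual came in above budget.
revenue_variance = actual["revenue"] - budget["revenue"]
# Favorable cost variance: actual came in below budget.
cost_variance = budget["cost"] - actual["cost"]

# Here both are negative: revenue fell short and cost overran,
# so the project is running unfavorably against its budget.
print(revenue_variance, cost_variance)
```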
Execution, Reporting, and Analysis
Published in Park Foreman, Vulnerability Management, 2019
One of the most common tools for monitoring the statistical variance in a process is a control chart. This chart in a manufacturing process, for example, will show the number of defects over time for a part. For the purposes of a VM application, we will look at the number of vulnerabilities for critical hosts over time. There are two lines that show the “control limits” for vulnerability discovery. These are called the upper control limit (UCL) and the lower control limit (LCL). Anything outside of these limits shows that things are not in “statistical control.” This means that there is a source of variation in our results or performance that is unanticipated. This is known as special cause variation. More commonly, when there is a pattern of variation within the upper and lower control limits, this is called common cause variation. This represents the level of variation inherent in any process. And, what often happens is that people look at short-term charts, see a blip, react, and try to fix a problem that in fact is not a problem but part of the normal variability of the process. Only once it is outside the control limits, that is, special cause, should it be addressed.
General introduction
Published in Adedeji B. Badiru, Handbook of Industrial and Systems Engineering, 2013
Variability describes the differences in an attribute observed between members of a single class of entities. For example, given a group of people (the class of entities), their height (an attribute of the class) will differ between each individual. Variance measures the absolute variability, and the standard deviation is the square root of the variance. Oftentimes, it is more useful to employ the CV, which defines relative variation as the standard deviation divided by the mean. If we denote c as the CV, σ as the standard deviation, and t as the mean, then c=σ/t. When the standard deviation is large relative to the mean, then there is significant variation. In queuing theory, instead of the CV, most approximations make use of the squared coefficient of variation (SCV), which is c2=σ2/t2.
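The CV and SCV definitions above can be computed directly with Python's standard library; the height data and the helper name are made up for illustration, and stdev here is the sample standard deviation:

```python
import statistics

def cv_and_scv(values):
    t = statistics.mean(values)       # mean t
    sigma = statistics.stdev(values)  # standard deviation sigma
    c = sigma / t                     # coefficient of variation c = sigma/t
    return c, c * c                   # SCV c^2 = sigma^2/t^2

heights_cm = [160, 170, 180]  # a hypothetical class of entities
c, c2 = cv_and_scv(heights_cm)
# sigma = 10, t = 170, so c is about 0.059: height varies little
# relative to its mean.
```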
Design of variance control charts with estimated parameters: A head to head comparison between two perspectives
Published in Journal of Quality Technology, 2022
Martin G. C. Sarmiento, Felipe S. Jardim, Subhabrata Chakraborti, Eugenio K. Epprecht
Woodall and Montgomery (1999, 381) stated that “Most of the research on SPC methods has been focused on monitoring the process mean (or mean vector). Process variability is just as, or more, important.” A small shift in the process variance may have a meaningful effect on the performance of the control chart for the mean. Indeed, before the control chart for the mean can be constructed and effectively used, the process variability must be in control and a good estimate of the variance must be available. Furthermore, if the process variability increases, the number of non-conforming units may increase seriously. On the other hand, if the process variability decreases, the likelihood that all of the produced units meet their required specifications is high, and the process capability could improve. Interesting applications are presented in the literature that show the importance of analyzing or monitoring the process variance (or standard deviation) on its own, as well as in the context of monitoring the process mean. See, for example, Tietjen and Johnson (1979), Sanders, Leitnaker, and Sanders (1994) and Franklin and Mukherjee (1999) with real datasets, and examples in Montgomery (2013, 264–267). Thus, monitoring variability is one of the most important aspects of statistical process monitoring and control. To this end, charts for the variance or standard deviation, such as the S² and S charts, are popular. However, when the underlying process variance σ² (or the standard deviation σ) is unknown, designing these charts is more involved. In this case, σ² (or σ) is typically first estimated from a Phase I (reference) dataset, consisting of m independent subgroups (samples) of size n, in what is called a Phase I analysis (see Chakraborti, Graham, and Human 2009; and Jones-Farmer et al. 2014). The estimates are then used to calculate the prospective (Phase II) control limits, which involves finding the “control limit factors” (also called charting constants).
The control limit factors are computed for given values of the number of Phase I subgroups m and the subgroup size n, together with a specified nominal in-control (IC) performance, typically stated in terms of the in-control average run length (ARL). Thus, finding the control limit factors is an important part of designing the control chart when parameters are estimated.
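As an illustration of the two-phase design described above (a sketch under stated assumptions, not the authors' method), the following Python estimates the process variance from m Phase I subgroups of size n and sets probability limits for an S² chart. The chi-square quantile uses the Wilson-Hilferty approximation, and all data are hypothetical:

```python
from statistics import NormalDist, variance

def chi2_quantile(p, df):
    # Wilson-Hilferty approximation to the chi-square quantile.
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def s2_chart_limits(phase1_subgroups, alpha=0.0027):
    # Phase I: estimate sigma^2 by averaging the m subgroup variances.
    n = len(phase1_subgroups[0])  # subgroup size n
    m = len(phase1_subgroups)     # number of subgroups m
    s2_bar = sum(variance(g) for g in phase1_subgroups) / m
    # Phase II probability limits for an S^2 chart with n - 1 degrees
    # of freedom. Treating the estimate as the true variance is exactly
    # the simplification the estimated-parameters literature corrects.
    lcl = s2_bar * chi2_quantile(alpha / 2, n - 1) / (n - 1)
    ucl = s2_bar * chi2_quantile(1 - alpha / 2, n - 1) / (n - 1)
    return lcl, ucl

# Three made-up Phase I subgroups of size five.
subgroups = [[5, 6, 5, 4, 5], [6, 5, 7, 5, 6], [4, 5, 6, 5, 5]]
lcl, ucl = s2_chart_limits(subgroups)
```

With few subgroups, the limits inherit the sampling error of the estimate, which is why the charting constants in the article depend on m and n rather than on the chi-square quantiles alone.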