Causal Mapping
Published in Penny W. Cloft, Michael N. Kennedy, Brian M. Kennedy, Success is Assured, 2018
Penny W. Cloft, Michael N. Kennedy, Brian M. Kennedy
Although developing a Problem Description is really the first step in the larger Problem-Solving Process, rather than part of the Causal Analysis Process, we would be remiss in skipping this step here, because the most common failure mode for Causal Analysis efforts is a poor Problem Description. Too small a scope results in not addressing the whole problem, or in sub-optimizing; too large a scope results in a slower problem-solving process, or even in complete stalls as the process becomes overwhelmed; and the wrong focus results in solving a symptom, or a lower-priority problem than the one that needs solving.

A good Causal Analysis process should be robust in the face of a bad Problem Description. That is, it should help you see that your Problem Description is flawed and lead you to an improved one. So, the development of the Problem Description is, in a sense, part of the Causal Analysis Process.
System and Software Measurement Programs
Published in Ron S. Kenett, Emanuel R. Baker, Process Improvement and CMMI® for Systems and Software, 2010
Ron S. Kenett, Emanuel R. Baker
By implementing CAR, the project classifies the various types of problems and defects and counts the frequency of their occurrence. Although the process may be performing normally, with no special causes of variation, certain types of defects or errors may still be occurring more frequently than others within the “noise” of the process. Using tools like a Pareto chart, the project can determine whether a given type or types of defect are occurring much more frequently than others. If so, the project, with the participation of appropriate stakeholders, can then perform a causal analysis, using tools like the Ishikawa (fishbone cause-and-effect) diagram. Having identified the most likely cause or causes, these stakeholders can propose likely fixes. A quantitative estimate should be made of the effectiveness of the proposed solution. If possible, an estimate of the expected mean and the upper and lower control limits for the fix should be made, and that information input into the appropriate PPM to estimate the effect on the ultimate result in terms of process performance or product quality. Clearly, if the change is going to have negligible impact, it should not be made; that would not be cost effective.
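The Pareto step described above can be sketched in a few lines of code: count defects by category, order the categories by frequency, and track the cumulative share to see which few categories dominate. The defect log and category names below are hypothetical illustrations, not data from the text.

```python
from collections import Counter

# Hypothetical defect log: each entry is the category assigned at triage.
defect_log = [
    "requirements", "coding", "coding", "coding", "requirements",
    "coding", "design", "coding", "test-env", "coding",
    "requirements", "coding", "design", "coding",
]

counts = Counter(defect_log)
total = sum(counts.values())

# Pareto ordering: most frequent category first, with cumulative share.
cumulative = 0.0
pareto = []
for category, n in counts.most_common():
    cumulative += n / total
    pareto.append((category, n, cumulative))

for category, n, cum in pareto:
    print(f"{category:12s} {n:3d}  cumulative {cum:.0%}")
```

In this toy log, a single category accounts for more than half the defects, which is exactly the kind of signal that would justify a focused causal analysis with the stakeholders.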
The Ultimate Improvement Cycle
Published in Bob Sproull, The Focus and Leverage Improvement Book, 2018
In Step 1b, we perform a Value Stream Analysis (VSA) to determine the locations of both waste and excess inventory. My recommendation in this step is to use a Current Reality Tree (to be explained in a later chapter) to identify areas of concern and the Undesirable Effects (UDEs) that will impede your performance efforts. In Step 1c, we recommend performing capability studies and implementing control charts where necessary. I also recommend performing a Pareto analysis, and then using tools like a Cause and Effect Diagram to perform a causal analysis. Causal Chains and Why-Why Diagrams are highly effective in this step, as we work to identify cause-and-effect relationships.
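A Why-Why chain of the kind recommended above can be represented as a simple tree: start from an Undesirable Effect and repeatedly ask "why?" until you reach candidate root causes at the leaves. This is a minimal sketch; the effects and causes shown are hypothetical examples, not content from the book.

```python
# Hypothetical Why-Why tree: each effect maps to the candidate causes
# uncovered by asking "why?" at the next level down.
why_why = {
    "late shipments": {
        "bottleneck at final assembly": {
            "excess WIP queued upstream": {
                "batch sizes set by local efficiency targets": {},
            },
        },
        "frequent rework": {
            "incoming parts out of spec": {},
        },
    },
}

def root_causes(tree):
    """Collect the leaves of the Why-Why tree: the deepest causes found."""
    leaves = []
    for cause, children in tree.items():
        if children:
            leaves.extend(root_causes(children))
        else:
            leaves.append(cause)
    return leaves

print(root_causes(why_why))
```

Walking the tree to its leaves surfaces the candidate root causes that the improvement effort should verify and address, rather than the intermediate symptoms.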
Understanding Failed Software Projects through Forensic Analysis
Published in Journal of Computer Information Systems, 2022
William H. Money, Stephen H. Kaisler, Stephen J. Cohen
We recognize the significant difficulties with the advanced techniques that can be used to ascribe causation to the failure of a software project. A recommended research approach would stress the use of converging results from multiple causal analysis methods, such as: causal diagrams; flow-graphs (nodes representing project events, interconnected by a set of linear algebraic equations); statistical analysis among variables; and causation parameters.40
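The flow-graph idea mentioned above can be illustrated as a small linear structural model: each node (a project event) takes a value that is a linear combination of its parents' values. The events, edges, and coefficients below are hypothetical illustrations under that assumption, not a method prescribed by the paper.

```python
# Hypothetical flow-graph over project events: each entry maps a node to
# its (parent, weight) pairs, so a node's value is a linear function of
# its parents, as in a simple linear structural model.
edges = {
    "schedule_slip": [("scope_creep", 0.6), ("staff_turnover", 0.3)],
    "budget_overrun": [("schedule_slip", 0.8)],
    "project_failure": [("budget_overrun", 0.5), ("schedule_slip", 0.4)],
}

def evaluate(node, observed, cache=None):
    """Propagate observed root-event magnitudes through the linear graph."""
    if cache is None:
        cache = {}
    if node in cache:
        return cache[node]
    if node not in edges:  # root event: take its observed magnitude
        value = observed.get(node, 0.0)
    else:
        value = sum(w * evaluate(parent, observed, cache)
                    for parent, w in edges[node])
    cache[node] = value
    return value

observed = {"scope_creep": 1.0, "staff_turnover": 1.0}
print(round(evaluate("project_failure", observed), 3))  # prints 0.72
```

Solving the same graph for different observed root events shows how such a model lets an analyst compare candidate causal paths to the failure node.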
Big data for cyber physical systems in industry 4.0: a survey
Published in Enterprise Information Systems, 2019
In the effectiveness direction, many researchers proposed different new methods, such as odds ratio (Mosteller 1968), relative risk (Sistrom and Garvan 2004), likelihood ratio (Neyman and Pearson 1992), lift (Brin et al. 1997), leverage (Piateski and Frawley 1991), BCPNN (Bate et al. 1998), two-way support (Tew et al. 2014), added value (Kannan and Bhaskaran 2009), and putative causal dependency (Huynh et al. 2007). Different methods will highlight different patterns, as random noise has different impacts on different methods for different patterns. For example, leverage highlights correlated patterns that occur frequently in the dataset, while BCPNN highlights correlated patterns that occur infrequently (Duan et al. 2014).

Besides handling random noise, another direction is causal analysis (Krämer et al. 2013). The results from correlation analysis are useful for prediction. For example, if events A and B are correlated, we can expect a higher chance for A to happen if B occurs. However, such a correlation relationship is not very useful for intervention: making efforts to reduce the probability of B cannot necessarily reduce the probability of A. It is therefore important to detect confounding factors in order to distinguish genuinely causal relationships from mere correlation. A confounding factor is an event C that is associated with both events A and B (Pearl 2000); a seemingly positive correlation between A and B is spurious when the confounding factor C is taken into account. For example, in healthcare, the drug Naltrexone is positively correlated with the disease pancreatitis, because Naltrexone is used to treat alcoholism and alcoholism often leads to pancreatitis. The confounding factor alcoholism creates this spurious correlation between Naltrexone and pancreatitis. Popular methods for detecting confounding factors include the Cochran-Mantel-Haenszel method (Cochran 1954), the logistic regression model (Li et al. 2014), and partial correlation (Baba, Shibata, and Sibuya 2004). In addition, timestamps of events are also useful for causal analysis, because causes always happen earlier than their effects (Kleinberg and Mishra 2009).

Correlation and causal analysis are very useful for machine failure monitoring and maintenance in Industry 4.0. For example, machines can have many different failure types, which require different interventions and maintenance; among the numerous signals generated by machines and events associated with them, a certain signal or event may be associated with one type of machine failure but not another. Correlation and causal analysis can help to associate signals and events with failure types, which is useful for machine failure prediction and maintenance plan improvement. Zaki, Lesh, and Ogihara (2001) utilized correlation analysis to prune out unpredictable and redundant patterns to improve machine failure prediction performance. Sammouri (2014) utilized correlation analysis to connect severe railway operation failures with sensor data for vehicles, rails, high-voltage lines, track geometry, and other railway infrastructure, allowing constant, daily diagnosis of both vehicle components and the railway infrastructure.
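The confounding pattern described above can be demonstrated with synthetic data: a confounder C raises the probability of both A and B while A has no direct effect on B, so the crude A-B association is strong, but stratifying on C (the idea behind the Cochran-Mantel-Haenszel approach) makes it largely vanish. All probabilities below are hypothetical choices for the simulation, not values from the survey.

```python
import random

random.seed(0)

# Synthetic confounding: C (think alcoholism) raises the probability of
# both A (drug exposure) and B (disease); A has no direct effect on B.
def sample():
    c = random.random() < 0.3
    a = random.random() < (0.7 if c else 0.1)
    b = random.random() < (0.6 if c else 0.1)
    return a, b, c

data = [sample() for _ in range(100_000)]

def p_b_given_a(rows, a_value):
    hits = [b for a, b, _ in rows if a == a_value]
    return sum(hits) / len(hits)

# Crude (unstratified) comparison: A looks strongly associated with B.
print("crude:", p_b_given_a(data, True), p_b_given_a(data, False))

# Stratify on the confounder C: within each stratum, the apparent
# association between A and B largely disappears.
for c_value in (True, False):
    stratum = [row for row in data if row[2] == c_value]
    print("stratum C =", c_value, ":",
          p_b_given_a(stratum, True), p_b_given_a(stratum, False))
```

The crude comparison suggests A strongly predicts B, yet within each C-stratum the two conditional probabilities are nearly equal, which is the signature of a spurious, confounded correlation.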