Human Health Studies
Published in Barry L. Johnson, Impact of Hazardous Waste on Human Health, 2020
Causal inference is the term used for the process of determining whether observed associations are likely to be causal; it requires guidelines and professional judgment. As the National Research Council noted, “The world of epidemiology, as that of any human science, seldom permits elegant inferences to be drawn about causation” (NRC, 1991b). A fundamental dilemma for environmental epidemiologists derives from the fact that the statistical correlation of variables (e.g., proximity to waste sites and elevated risk of birth defects) does not necessarily indicate any causal relationship among the variables, even where tests of statistical significance are met. As the NRC observed, “Mere coincident occurrence of variables says nothing about their essential connection” (ibid.). Professional assessment must be made of a study’s findings and inferences, and ultimately causation is inferred only after consideration of all relevant science and epidemiology.
Causal Concepts and Graphical Models
Published in Marloes Maathuis, Mathias Drton, Steffen Lauritzen, Martin Wainwright, Handbook of Graphical Models, 2018
We start by clarifying the notation and terminology used in this chapter as well as the role of graphical models, recalling how they relate to conditional independence structures. In Section 1.2, a brief tour through different causal notations and frameworks introduces common key concepts as well as differences between formal approaches. Two ways of combining these with graphical models are addressed and compared in Section 1.3. A central question of causal inference is whether a desired causal effect is identified from observational data. Early ideas of how to obtain graph-based answers are discussed in Section 1.4.
Human monitoring systems for health, fitness and performance augmentation
Published in Adedeji B. Badiru, Cassie B. Barlow, Defense Innovation Handbook, 2018
Mark M. Derriso, Kimberly Bigelow, Christine Schubert Kabban, Ed Downs, Amanda Delaney
Therefore, this analytical method, structural equation modeling, allows estimation of both direct and indirect effects to investigate the processes underlying the relationships between the factors, which are known as constructs [19]. In addition, effects and relations established in the structural equation model are causally related; that is, causal inference is now possible. Fitting the structural equation model involves estimating the coefficients of the pathways between each of the constructs, as well as computations and tests to modify the pathways. Model fit is assessed through fit indices [19–24].
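The decomposition of a total effect into direct and indirect components can be sketched with a minimal path-analysis model (a special case of SEM fitted by ordinary least squares, not the authors' actual model; the variables, path coefficients, and data-generating process below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical path model X -> M -> Y with a direct path X -> Y
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # a-path: X -> M, coefficient 0.5
y = 0.3 * x + 0.4 * m + rng.normal(size=n)  # direct effect 0.3, b-path 0.4

def ols(X, y):
    """Least-squares path coefficients for predictors X (intercept added)."""
    X = np.column_stack([X, np.ones(len(y))])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(x, m)[0]                       # estimated X -> M path
b_direct, b = ols(np.c_[x, m], y)[:2]  # X -> Y (direct) and M -> Y paths

indirect = a * b           # indirect effect of X on Y through M (product of paths)
total = b_direct + indirect

print(round(b_direct, 2), round(indirect, 2), round(total, 2))  # ≈ 0.3, 0.2, 0.5
```

A full SEM additionally models latent constructs and assesses fit indices, but the direct/indirect-effect logic is the same product-of-path-coefficients computation shown here.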
A causal inference method for canal safety anomaly detection based on structural causal model and GBDT
Published in LHB, 2023
Hairui Li, Xuemei Liu, Xianfeng Huai, Xiaolu Chen
Since the need for prediction is prevalent in big data applications, prediction is often used as the goal of model estimation in regression modelling. However, recent research has found that machine learning models can achieve good results by learning spurious correlations, yet generalise poorly in real-world settings, making these problems better framed as causal inference tasks. The causal inference task is somewhat similar to the prediction task in that both produce an estimate of a variable from some evidence. The difference lies in the modelling objective. The prediction task learns the conditional probability distribution from historical data by considering only the correlation between features x and the target y, and uses it to estimate the value of y that fits the pattern of historical data given x. The causal inference task, on the other hand, learns the intervention distribution among variables from historical data under certain causality assumptions, and is usually used to estimate the effect of a change in x on y.
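The gap between the conditional distribution and the intervention distribution can be made concrete with a small simulation (a sketch, not from the article; the data-generating process and variable names are illustrative). When a confounder z drives both x and y, the regression of y on x alone is the best predictor of y given x, yet its coefficient is not the causal effect of x on y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process with a confounder z
z = rng.normal(size=n)                       # common cause of x and y
x = z + rng.normal(size=n)                   # x depends on z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true causal effect of x on y is 2.0

# Predictive fit: regress y on x alone. This estimates E[y | x],
# which absorbs the confounding path x <- z -> y.
beta_pred = np.linalg.lstsq(np.c_[x, np.ones(n)], y, rcond=None)[0][0]

# Causal fit, assuming z is observed: adjusting for z blocks the
# confounding path, recovering the interventional effect of x on y.
beta_causal = np.linalg.lstsq(np.c_[x, z, np.ones(n)], y, rcond=None)[0][0]

print(round(beta_pred, 2))    # ≈ 3.5: good for prediction, wrong as a causal effect
print(round(beta_causal, 2))  # ≈ 2.0: the effect of intervening on x
```

Both fits are valid answers to different questions: the first estimates y given that x was observed, the second estimates y given that x was set, which is exactly the distinction the excerpt draws.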
Two-stage approach to causality analysis-based quality problem solving for discrete manufacturing systems
Published in Journal of Engineering Design, 2023
Haonan Wang, Yuming Xu, Tao Peng, Reuben Seyram Komla Agbozo, Kaizhou Xu, Weipeng Liu, Renzhong Tang
Causality analysis theory was originally proposed by Fisher (1970) and Granger (1969), was further developed by Judea Pearl (2000), and has been maturing steadily over the last two decades. Generally, causality analysis has two parts: causal discovery and causal inference. The former, also known as causal structure learning, is used to learn the causal relationships between variables, with algorithms classified as constraint-based, score-based, and hybrid methods (Zhou and Chen 2022; Glymour, Zhang, and Spirtes 2019). Causal inference, also referred to as causal effect estimation, uses observed data to estimate the causal effect of one variable on another, typically via the structural causal model or the Rubin causal model (Yao et al. 2021). The estimated causal effect can eliminate the bias caused by confounding factors (Li, Ding, and Mealli 2023; Moraffah et al. 2021), providing a credible causal relationship to support solution creation. However, in manufacturing systems, confounding factors usually exist between process parameters and product quality, which leads to significantly low efficiency in on-site quality problem solving. By incorporating causality analysis, the bias introduced by confounding factors can be eliminated (Mooij et al. 2016). The preceding discussion indicates that incorporating causality analysis is crucial in creating solutions (Li and Shi 2007).
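How adjusting for a confounder eliminates bias in an estimated causal effect can be sketched with a stratification (back-door adjustment) example in the Rubin-style potential-outcomes setting. This is an illustration under assumed numbers, not the paper's manufacturing data; read t as a process-parameter change and y as a quality measure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical confounded process: z affects both treatment and outcome
z = rng.binomial(1, 0.5, n)                      # binary confounder
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))  # treatment more likely when z = 1
y = 1.0 * t + 2.0 * z + rng.normal(size=n)       # true causal effect of t is 1.0

# Naive estimate: difference in observed group means, biased by z
naive = y[t == 1].mean() - y[t == 0].mean()

# Back-door adjustment: average within-stratum contrasts, weighted by P(z)
ate = sum(
    (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)

print(round(naive, 2))  # ≈ 2.2: inflated because z pushes t and y the same way
print(round(ate, 2))    # ≈ 1.0: recovers the causal effect under the back-door assumption
```

The adjusted estimate matches the true effect only because z is observed and closes every confounding path; with unobserved confounding, neither estimator would suffice.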
On the stress potential of an organisational climate of innovation: a survey study in Germany
Published in Behaviour & Information Technology, 2022
Fourth, we collected our data in a cross-sectional online survey. Though we controlled for common method bias, our data collection approach nonetheless has weaknesses that need to be addressed in future research. For example, from a strict methodological viewpoint, causal inference is only possible based on longitudinal data or experimental designs that involve deliberate manipulation of the independent variable. Such study designs would be needed to investigate the proposed evolutionary process of innovations as outlined by Coccia and colleagues (e.g. Coccia 2017, 2019a, 2019b) or the influence of actual innovative work behaviours on organisational performance (e.g. Shanker et al. 2017). It would also be worthwhile to investigate the potential of other types of data collection in addition to self-reports to shed more light on the effects and coping mechanisms related to perceived uncertainty and unreliability. As shown in a systematic review (Riedl 2013), ICT-related stressors such as unreliability of systems may negatively affect a number of physiological parameters, including elevations of stress hormones (e.g. adrenaline, noradrenaline, or cortisol) or changed patterns of autonomic nervous system activity (e.g. increased heart rate, reduced heart rate variability, or muscle tension).