Simulation as a tool to study systems and enhance resilience
Published in Erik Hollnagel, Jeffrey Braithwaite, Robert L. Wears, Delivering Resilient Health Care, 2018
Ellen Deutsch, Terry Fairbanks, Mary Patterson
The design of this project includes clear examples of the four main activities of resilience engineering: monitoring, reacting, learning and anticipating (Hollnagel, 2011a). The methodology and analysis incorporated a Safety-II approach of reinforcing appropriate actions and resources (Hollnagel et al., 2015; Woods and Cook, 2006), making the margins and constraints of the system visible, and developing team behaviours that have the potential to improve the adaptive capacity of the team (Braithwaite et al., 2015). The concepts of margins, constraints and boundaries are based on the work of Cook and Rasmussen (Rasmussen, 1997; Cook and Rasmussen, 2005), which suggests dynamic trade-offs between pressures to optimise workload, productivity, and the boundary of safe performance.
Safety Organization and Risk Management
Published in Tom Kontogiannis, Stathis Malakis, Cognitive Engineering and Safety Organization in Air Traffic Management, 2017
Tom Kontogiannis, Stathis Malakis
The resilience approach emphasizes the need for organizations to develop a capability for dealing with unknowns. Being resilient requires that practitioners are able to improvise and adapt procedures to unfamiliar situations. There is some criticism that risk assessment methods are built using the rear-view mirror: they look at the past to generate warnings for the future. Unfortunately, the world is not quite so linear; it evolves through alterations that change the assumptions of risk analysts. A case in point is the new aviation environment, which will operate in different ways from the current one. As a result, the history of failures and adverse events may not be so useful in predicting future patterns of operation in aviation. Hence there is a need for risk assessment methods that look toward the future and make use of modern approaches to systems thinking and complexity. In this sense, resilience engineering emphasizes the adaptive capacity of organizations to survive adverse events (e.g., making the system less tightly coupled, switching to new organizational structures, and providing more autonomy to practitioners).
Human Error, Interaction, and the Development of Safety-Critical Systems
Published in Guy A. Boy, The Handbook of Human-Machine Interaction, 2017
A great deal of attention has recently been devoted to the topic of “resilience engineering” (Hollnagel, Woods, and Leveson, 2006). This approach assumes that we should focus less on the causes of human error and more on the promotion of recovery actions, such as the application of undo in the previous desktop publishing example. Resilience engineering starts from the assumption that humans are not simply a cause of error; they also act as a key means of mitigating failure in complex systems. This is a critical observation (Reason, 2008). For many years, developers have responded to the problem of human error in safety-critical systems by attempting to engineer out human involvement in these systems (Dekker, 2006). The argument is made that, because even experts make mistakes, we should minimize the opportunity for operator error to undermine safety. For this reason, engineers have worked hard to develop autonomous spacecraft, such as NASA’s DART or the European Space Agency’s Automated Transfer Vehicle; they have also developed automated systems that intervene, for instance, to apply automatic braking in rail applications. However, these systems have had very mixed success. For instance, accidents and collisions have been caused when automated braking systems have been inadvertently triggered. In other words, removing or restricting operator intervention tends to move the opportunity for error to other areas of the development lifecycle (Johnson, 2003). The end user may not be responsible for particular failures; however, errors tend to occur in the design and maintenance of the automated systems that are assuming new levels of control. It seems likely, therefore, that “human error” will remain a significant concern for the development of complex systems.
State of science: evolving perspectives on ‘human error’
Published in Ergonomics, 2021
Gemma J. M. Read, Steven Shorrock, Guy H. Walker, Paul M. Salmon
Once environmental and other systemic factors were admitted into the causation of errors, the natural next step was to focus on the dynamic aspects of the complex systems within which errors take place. Rasmussen’s (1997) model of migration proposed that behaviour within a system is variable within a core set of system constraints, with behaviours adapting in line with gradients towards efficiency (influenced by management pressure) and least effort (influenced by individual preferences), eventually migrating over time towards the boundaries of unacceptable performance. More recently, resilience engineering has emerged to consider ‘the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions’ (Hollnagel 2014). This development led to a re-branding from Safety-I thinking (i.e. a focus on preventing accidents and incidents) to Safety-II (an understanding of everyday functioning and how things usually ‘go right’). While it has been emphasised that these views are complementary rather than conflicting, Safety-II advocates a much stronger focus on normal performance variability within a system, especially at the higher levels (e.g. government, regulators), where a Safety-I view has traditionally been taken (Hollnagel et al. 2013).
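Rasmussen’s migration dynamic can be illustrated with a minimal toy simulation. The sketch below is not from Rasmussen (1997) or any of the works cited here; the function name, parameters, and numerical values are all illustrative assumptions. It models an operating point that drifts towards the boundary of unacceptable performance under efficiency pressure, with sporadic safety campaigns pushing it back:

```python
import random

def simulate_migration(steps=200, pressure=0.02, campaign_prob=0.05,
                       campaign_pushback=0.3, seed=42):
    """Toy sketch of Rasmussen-style migration (illustrative only).

    x measures how far the operating point has drifted towards the
    boundary of unacceptable performance (x >= 1.0 means a crossing).
    """
    random.seed(seed)
    x = 0.2                # initial distance towards the boundary
    crossings = 0          # number of times the boundary is reached
    for _ in range(steps):
        # Gradient towards efficiency/least effort: steady drift outwards.
        x += pressure * random.random()
        # Occasional safety campaign pushes the operating point back.
        if random.random() < campaign_prob:
            x = max(0.0, x - campaign_pushback)
        if x >= 1.0:
            crossings += 1
            x = 0.5        # post-incident reset, then drift resumes
    return crossings

print(simulate_migration())
```

The point of the sketch is qualitative: with a steady outward gradient and only intermittent pushback, the operating point repeatedly migrates back towards the boundary, which is why Safety-II attends to normal performance variability rather than only to the crossings themselves.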
From the design of green buildings to resilience management of building stocks
Published in Building Research & Information, 2018
Resilience engineering (not to be confused with the engineering definition of resilience) is specifically concerned with socio-technical systems: ‘Resilience engineering is concerned with building systems that are able to circumvent accidents through anticipation, survive disruptions through recovery, and grow through adaptation’ (Madni & Jackson, 2009, p. 189).