Tool Support for Requirements Engineering
Published in Phillip A. Laplante, Mohamad H. Kassab, Requirements Engineering for Software and Systems, 2022
Phillip A. Laplante, Mohamad H. Kassab
Traceability is especially relevant when developing safety-critical systems and is therefore prescribed by safety standards such as DO-178C, ISO 26262, and IEC 61508. For example, DO-178C, Software Considerations in Airborne Systems and Equipment Certification, which is used by federal aviation regulatory agencies in the US, Canada, and elsewhere, contains rules for artifact traceability. DO-178C requires traceability between all low-level requirements and their parent high-level requirements. Links are also mandatory between source code elements, requirements, and test cases. All software components must be linked to a requirement; that is, every element must have a required purpose (i.e., no gold-plating). Certification activities are then conducted to ensure that these rules are followed (RTCA 2011). For our purposes, however, we are only concerned with traceability artifacts found in the requirements specification document (or ancillary documents).
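The traceability rules described above can be checked mechanically. The following is a minimal sketch, with entirely illustrative data and function names (not an actual DO-178C tool): every low-level requirement must trace to a known high-level parent, and every source element must trace to a requirement (no gold-plating).

```python
# Illustrative traceability data: LLR -> parent HLR, source file -> LLR.
# A missing link is modeled as None.
hlrs = {"HLR-1", "HLR-2"}
llr_to_hlr = {"LLR-1": "HLR-1", "LLR-2": "HLR-2", "LLR-3": None}
code_to_req = {"ctrl.c": "LLR-1", "io.c": "LLR-2", "dbg.c": None}

def untraced_llrs(llr_to_hlr, hlrs):
    # LLRs whose parent is missing or not a known HLR.
    return sorted(l for l, h in llr_to_hlr.items() if h not in hlrs)

def gold_plated(code_to_req, reqs):
    # Source elements with no required purpose, i.e., no requirement link.
    return sorted(c for c, r in code_to_req.items() if r not in reqs)

print(untraced_llrs(llr_to_hlr, hlrs))            # ['LLR-3']
print(gold_plated(code_to_req, set(llr_to_hlr)))  # ['dbg.c']
```

In practice such checks are run by requirements-management tools over the traceability matrix; this sketch only shows the two rules as set operations.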
S
Published in Phillip A. Laplante, Dictionary of Computer Science, Engineering, and Technology, 2017
safety-critical a system whose failure may cause injury or death to human beings; for example, an aircraft or nuclear power station control system. Common tools used in the design of safety-critical systems are redundancy and formal methods.
Operating Documents that Change in Real-time: Dynamic Documents and User Performance Support
Published in Guy A. Boy, The Handbook of Human-Machine Interaction, 2017
Barbara K. Burian, Lynne Martin
Certification for any product confirms that it meets certain performance and quality requirements. Performance criteria may include efficiency or a product’s suitability for its intended use. In the case of safety-critical systems like some dynamic documents, certification may include the degree to which safety criteria are met and testing results are robust. Certification is usually achieved through an assessment by an agent or organization outside of the design and production company—in the field of aviation in the United States this organization is the Federal Aviation Administration (FAA). Thus, certification demonstrates that a third, disinterested party considers that the product meets certain specified criteria.
A human–machine interaction design and evaluation method by combination of scenario simulation and knowledge base
Published in Journal of Nuclear Science and Technology, 2018
Zhanguo Ma, Hidekazu Yoshikawa, Amjad Nawaz, Ming Yang
Human–machine interaction (HMI), which is recognized as essential for process safety, quality, and efficiency, comprises all aspects of interaction and communication between humans (users) and machines via human–machine interfaces. The term 'machine' denotes any kind of designed system, such as automation, i.e., the supervision and control system [1]. Automation achieves goals such as greater safety, better quality control, and cost savings, as well as liberating humans from laborious work. However, automation reduces operators' system awareness and manual skills while increasing their monitoring workload [2]. Current designs, especially for nuclear power plants (NPPs), employ passive safety features that try to exclude the human from the safety control system; nevertheless, when automation fails, the human factor becomes critical to coping with the accident in safety-critical systems. This is the famous 'ironies of automation' argument made by Bainbridge in the 1980s [3]. Therefore, the HMI should be designed so that humans work in harmony with automation to achieve safety and efficiency in complex and typically large-scale systems such as NPPs, aircraft control, and manufacturing plants.
The Konect value – a quantitative method for estimating perception time and accuracy for HMI designs
Published in Behaviour & Information Technology, 2018
Marie-Christin Harre, Sebastian Feuerstack
A human operator monitoring a safety-critical system, such as an airplane, a power plant, or a semi-automated vehicle, must be able to detect problems or errors occurring in the system as quickly and accurately as possible in order to initiate countermeasures in time and eliminate the risk of negative impacts. For this reason, HMI designs for human monitoring tasks in the safety-critical domain try to minimise the time a human needs to become aware of unexpected events and to ensure that the relevant information can be perceived correctly. Both aspects have to be validated before the system is used in real operation. However, testing such HMI designs is a challenging and complex task, since user testing is quite difficult in safety-critical domains. Critical situations are typically tested in simulated environments, such as a driving simulator, to ensure the safety of subjects. Measuring HMI designs requires that they be implemented and integrated into the simulation environment, which can be an expensive process. Additionally, operators of safety-critical systems are highly trained professionals who are often rare and quite expensive. HMI design typically proceeds in several design-evaluation iterations that gradually improve the HMI, which further raises costs and effort, especially if a new simulator study is performed for each cycle to measure the metrics. In some cases, tests are therefore performed only at the end of the process, on functional systems. This means that design problems are discovered only near the end of the development process, after deployment, when they are costly to fix.
Review of battery powered embedded systems design for mission-critical low-power applications
Published in International Journal of Electronics, 2018
Matthew Malewski, David M. J. Cowell, Steven Freear
Mission-critical systems are those required for the successful completion or operation of a larger system or task. Safety-critical systems are those whose failure modes could potentially incur loss of life (Fowler, 2009). Embedded systems for these applications are typically required to operate with almost zero downtime. To minimise downtime, several techniques are used; examples include redundancy and watchdog timers (WDTs). Further challenges are presented by the operational, environmental, and implementation constraints under which these systems typically operate.
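The watchdog-timer technique mentioned above can be illustrated with a small simulation. This is a hedged sketch, not real embedded firmware: the class and parameter names are invented for illustration, and a hardware WDT would be a peripheral that resets the processor rather than a Python object.

```python
# Minimal software model of a watchdog timer (WDT): a supervised task must
# periodically "kick" the watchdog; if it hangs and stops kicking, the
# watchdog times out and forces a reset. All names are illustrative.
class Watchdog:
    def __init__(self, timeout_ticks):
        self.timeout_ticks = timeout_ticks
        self.counter = 0
        self.resets = 0

    def kick(self):
        # Called by the supervised task to prove it is still alive.
        self.counter = 0

    def tick(self):
        # Called on every timer tick; fires a reset when the task
        # has not kicked within timeout_ticks ticks.
        self.counter += 1
        if self.counter >= self.timeout_ticks:
            self.resets += 1
            self.counter = 0  # model the post-reset state

wdt = Watchdog(timeout_ticks=3)
for step in range(10):
    wdt.tick()
    if step < 5:
        wdt.kick()  # healthy phase: task kicks in time, no reset
    # steps 5..9: task is "hung", so the watchdog eventually fires
print(wdt.resets)  # 1
```

The design choice modeled here is that recovery is automatic: a hung task is detected by the absence of kicks, and the reset restores service without operator intervention, which is how WDTs help keep downtime near zero.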