Safety of Sociotechnical Systems and Sociotechnical Work
Published in Torgeir K. Haavik, New Tools, Old Tasks, 2017
When an innovation is in the making, it is often full of controversies, and its functionality and impact are not yet unambiguously defined. Many central writings in science and technology studies (STS) (e.g. Bijker, 1995; Latour, 1988, 1996a; Shapin and Schaffer, 1985) have resulted from studies of science in the making, i.e. before it was black boxed. The notion of the black box is borrowed from cybernetics, where it denotes a device about which users need to know only the input and the output. In ANT, it is a label put on phenomena (technologies, work processes, etc.) after their internal controversies have been silenced and their functionality and comprehension have become unambiguous (Latour, 1987).
Engineering Systems Concepts
Published in Graeme Dandy, David Walker, Trevor Daniell, Robert Warner, Planning and Design of Engineering Systems, 2018
In order to achieve an overview of the functioning of a system component, it is not always necessary to study the operation of each sub-component simultaneously. In fact, by avoiding detail and by concentrating on the overall input-output relations among the various components, it is often possible to achieve a good overview of the behaviour of the system. When the detailed internal functioning within a component is ignored, and the component is regarded simply as a device that transforms inputs into outputs, it can be thought of as a black box.
Engineering Systems Concepts
Published in Graeme Dandy, Trevor Daniell, Bernadette Foley, Robert Warner, Planning & Design of Engineering Systems, 2018
In order to achieve an overview of the functioning of a system component, it is not always necessary to study the internal functioning of the sub-components. In fact, by avoiding detail and by concentrating on the overall input-output relations among components, it is often possible to achieve a good overview of the behaviour of the system. When the detailed internal functioning within a component or system is ignored, and it is regarded simply as a device that transforms inputs into outputs, it can be thought of as a black box.
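The input-output view described above can be illustrated with a short sketch (hypothetical component names and numbers, not from the book): each component exposes only a mapping from inputs to outputs, so components, and the systems composed from them, can be analysed without opening the box.

```python
from typing import Callable

# A "black box" is anything that maps an input to an output;
# callers never depend on how the transformation works internally.
BlackBox = Callable[[float], float]

def pump(flow_in: float) -> float:
    """Internal details (efficiency losses, etc.) stay hidden."""
    return flow_in * 0.9  # hypothetical 10% loss

def filter_unit(flow_in: float) -> float:
    return flow_in - 0.5  # hypothetical constant bypass loss

def compose(*components: BlackBox) -> BlackBox:
    """A system of black boxes is itself a black box."""
    def system(x: float) -> float:
        for component in components:
            x = component(x)
        return x
    return system

water_system = compose(pump, filter_unit)
print(water_system(10.0))  # overall input-output relation: 10.0 -> 8.5
```

Only the composed input-output relation (10.0 in, 8.5 out) is visible to a user of `water_system`; the internals of `pump` and `filter_unit` could change without affecting that interface.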
A grey production planning model on a ready-mixed concrete plant
Published in Engineering Optimization, 2020
Erdal Aydemir, Gokhan Yılmaz, Kenan Oguzhan Oruc
GST, which was introduced at the beginning of the 1980s by J. L. Deng, deals with systems whose information is partly known and partly unknown (Deng 1982). GST has become a very popular technique, with applications under grey uncertainties. One of its advantages is that it requires only a limited amount of data with poor information to estimate the behaviour of a system using statistical techniques. According to GST, any system falls into one of three situations for modelling: black, grey and white. Fundamentally, GST offers multiple solutions to uncertain problems. A black-box system is one whose inputs and outputs are known but whose internal behaviour is unknown. If the system behaviour is entirely known, the system is called a white-box system. Thus, if the system behaviour is partly known and partly unknown, the system is called a grey (box) system (Deng 1982, 1989; Liu and Lin 2006). A comparison of grey systems, probability and statistics, and fuzzy mathematics is given in Table 1.
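As an illustration of grey modelling from sparse data, the following sketch implements Deng's classic GM(1,1) model in plain Python (the sample series is invented for demonstration, not taken from the paper): accumulate the series, fit the whitening equation dx1/dt + a·x1 = b by least squares, and invert the accumulation to recover fitted values and a forecast.

```python
import math

def gm11(x0):
    """Fit a GM(1,1) grey model to a short series x0.
    Returns (fitted values for x0, one-step-ahead forecast)."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # background values z1(k) = 0.5 * (x1(k) + x1(k-1))
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least-squares estimate of (a, b) via the 2x2 normal equations
    # (kept library-free for self-containment)
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    sy = sum(y)
    m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # time response: x1_hat(k) = (x0(1) - b/a) * exp(-a*k) + b/a
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n)]
    forecast = x1_hat(n) - x1_hat(n - 1)
    return fitted, forecast

# invented demand series (e.g. monthly orders); four points suffice
fitted, nxt = gm11([2.0, 2.2, 2.42, 2.662])
```

With only four observations the model recovers the roughly exponential trend and extrapolates one step ahead, which is the sense in which GST "requires only a limited amount of data with poor information".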
Nudging users towards better security decisions in password creation using whitebox-based multidimensional visualisations
Published in Behaviour & Information Technology, 2022
Katrin Hartwig, Christian Reuter
Often, cybersecurity measures are hard to understand for the average user. Interface design can be used to address that problem, for example by providing visual feedback on decisions in critical situations or by displaying relevant information (Boyce et al. 2011). When visualising information, people's preferences regarding transparency differ. While for some people it is sufficient to see an aggregated output of a calculation, others will not trust the output unless they can comprehend how it was generated. For the latter, algorithmic transparency is essential. There are mainly two different approaches to explaining algorithms: black-box and white-box approaches (Cheng et al. 2019). In a black-box approach, the user can observe the input and the output but not what happens in between. This may lead to reactance in those who need more information. On the other hand, black-box approaches are less likely to cause information overload. White-box approaches enable the user to understand how the output was generated; the underlying basis for output generation is transparent to the user. That is beneficial especially for individuals who are otherwise likely to feel reactance or mistrust (Hartwig and Reuter 2019, 2020). However, others might feel overwhelmed by too much information (Kaufhold et al. 2020). Explainable algorithms have emerged as a highly relevant field of research, for instance to understand how decisions in health and finance are made by machine learning techniques (e.g. Abdul et al. 2018; Cheng et al. 2019; Wang et al. 2019). In the context of nudging, calculations are commonly based on less complex methods. Therefore, giving transparent (and thus whitebox-based) reasons can often be accomplished by simply disaggregating the calculated information and visualising the relevant dimensions in a comprehensible way.
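The disaggregation idea can be sketched as follows (hypothetical scoring rules for illustration, not the authors' implementation): a black-box nudge shows only the aggregated score, while a white-box nudge exposes the dimensions the calculation is based on.

```python
def score_password(pw: str) -> dict:
    """Per-dimension checks (hypothetical criteria for illustration)."""
    return {
        "length >= 12":    int(len(pw) >= 12),
        "contains digit":  int(any(c.isdigit() for c in pw)),
        "contains symbol": int(any(not c.isalnum() for c in pw)),
        "mixed case":      int(pw.lower() != pw and pw.upper() != pw),
    }

def black_box_feedback(pw: str) -> str:
    # aggregated output only: low overload risk, low transparency
    total = sum(score_password(pw).values())
    return f"strength: {total}/4"

def white_box_feedback(pw: str) -> str:
    # disaggregated output: each dimension is visible to the user
    dims = score_password(pw)
    return "\n".join(
        f"[{'x' if ok else ' '}] {name}" for name, ok in dims.items()
    )

print(black_box_feedback("Tr0ub4dor&3"))  # prints "strength: 3/4"
print(white_box_feedback("Tr0ub4dor&3"))  # one checklist line per dimension
```

The white-box variant is exactly the "disaggregating and visualising relevant dimensions" strategy: the same calculation, presented so the user can see why the score is what it is.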