Recreating Efficient Framework for Resource-Constrained Environment: HR Analytics and Its Trends for Society 5.0
Published in Kavita Taneja, Harmunish Taneja, Kuldeep Kumar, Arvind Selwal, Eng Lieh Ouh, Data Science and Innovations for Intelligent Systems, 2021
Kamakshi Malik, Rakesh K. Wats, Aman Khera
The problems of overfitting and algorithmic bias lead to misrepresented analyses and prejudiced conclusions, thus failing the very purpose of HR analytics. Overfitting happens when a model starts catching the noisy, irrelevant, or inaccurate values in the data, fitting more detail than the underlying signal warrants. With large data sets, the machine learning model detects patterns that lack content validity or are only spuriously significant; as a result, the efficiency and accuracy of the model decrease. Algorithmic bias occurs when the algorithm is poorly trained and inaccurately includes or excludes data. The more categorised data an algorithm sees, the better job it performs. The trade-off to this strategy, though, is that deep learning algorithms can form weak spots, depending on what is lacking or too abundant in the data on which they are trained. For example, a photo app developed by Google erroneously tagged a photo of two black people as gorillas because its algorithm had not been trained with adequate images of people with dark-colored skin.
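To make the overfitting mechanism concrete, here is a minimal sketch (my illustration, not from the chapter) that fits polynomials of increasing degree to noisy data with scikit-learn: the high-degree model chases the noise, so its training error keeps falling while its error on held-out data grows.

```python
# Minimal overfitting demonstration (illustrative, not from the chapter):
# a high-degree polynomial chases noise in the training data, so training
# error falls while error on held-out data rises.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)  # signal + noise
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree={degree:2d}  "
          f"train MSE={mean_squared_error(y_train, model.predict(X_train)):.3f}  "
          f"test MSE={mean_squared_error(y_test, model.predict(X_test)):.3f}")
```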
What Is Augmented Intelligence?
Published in Judith Hurwitz, Henry Morris, Candace Sidner, Daniel Kirsch, Augmented Intelligence, 2019
Judith Hurwitz, Henry Morris, Candace Sidner, Daniel Kirsch
When is augmented intelligence superior to either human intelligence or machine intelligence alone? We assert the following three principles on why machines alone are not the future for business success:

There are limits to how well humans can understand the content and scope of their data, and to how much data an individual can absorb. These limitations are evident, for example, when analyzing large data sets, streaming data, and complex unstructured data. Machine intelligence can supplement these capabilities to help people understand data.

Humans must also make the decision regarding where and when to deploy automation and machine learning. A variety of considerations need to be understood. First, when does it make economic sense to apply machine learning techniques? Second, does the organization have the right data and the right infrastructure to support a major change? Third, decision makers need to understand the strategic intent of the platform. Complex decision making will require humans to collaborate with machine intelligence. In all situations, humans are responsible for providing governance and controls to address machine intelligence limitations, such as algorithmic bias.

Combining human and machine intelligence in a redesigned business process can overcome the limitations of humans or machines acting alone and produce the best outcomes. Humans are the ones who need to use their knowledge to redesign business processes, because the technology is not going to make the difficult decisions to transform the business.
Data science ethics
Published in Benjamin S. Baumer, Daniel T. Kaplan, Nicholas J. Horton, Modern Data Science with R, 2021
Benjamin S. Baumer, Daniel T. Kaplan, Nicholas J. Horton
Biased data may lead to algorithmic bias. As an example, some groups may be underrepresented or systematically excluded from data collection efforts. D'Ignazio and Klein (2020) highlight issues with data collection related to undocumented immigrants. O'Neil (2016) details several settings in which algorithmic bias has harmful consequences, whether intended or not.
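As a hypothetical illustration of the underrepresentation problem (my own sketch, not the authors'), the code below trains a classifier on synthetic data in which one group supplies only a handful of examples; because that group's feature-label relationship differs, the model ends up markedly less accurate for it.

```python
# Illustrative sketch (synthetic data): a classifier trained on data that
# underrepresents group B learns group A's decision boundary and performs
# poorly on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    # Each group has its own feature distribution and decision boundary.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
XA, yA = make_group(1000, shift=0.0)
XB, yB = make_group(30, shift=2.0)
model = LogisticRegression().fit(np.vstack([XA, XB]), np.hstack([yA, yB]))

XA_test, yA_test = make_group(500, shift=0.0)
XB_test, yB_test = make_group(500, shift=2.0)
print("accuracy, group A:", model.score(XA_test, yA_test))
print("accuracy, group B:", model.score(XB_test, yB_test))
```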
AI ethical biases: normative and information systems development conceptual framework
Published in Journal of Decision Systems, 2022
AI bias, as a deviation from the norm, can happen at any stage of the AI application development process. A narrative literature review identified several AI-related biases. Scholars grappling with issues around AI biases have generally used the concept of algorithmic bias to cover the ethical issues across the ISD process (Floridi & Taddeo, 2016). Algorithmic bias is viewed as a discriminatory case of algorithmic outcomes with the potential to adversely impact entities, owing to inaccurate modelling that misrepresents the associations between the feature variables and the outcome variable (Rozado & Schwieren, 2020; Tsamados et al., 2021). Generally, an algorithmic bias can emanate from an underlying dataset, inadequate methodological approaches, or societal factors (Akter, McCarthy et al., 2021; Walsh et al., 2020). The various algorithmic biases gleaned from the literature are summarised in Table 2.
Six Human-Centered Artificial Intelligence Grand Challenges
Published in International Journal of Human–Computer Interaction, 2023
Ozlem Ozmen Garibay, Brent Winslow, Salvatore Andolina, Margherita Antona, Anja Bodenschatz, Constantinos Coursaris, Gregory Falco, Stephen M. Fiore, Ivan Garibay, Keri Grieman, John C. Havens, Marina Jirotka, Hernisa Kacorri, Waldemar Karwowski, Joe Kider, Joseph Konstan, Sean Koon, Monica Lopez-Gonzalez, Iliana Maifeld-Carucci, Sean McGregor, Gavriel Salvendy, Ben Shneiderman, Constantine Stephanidis, Christina Strobel, Carolyn Ten Holter, Wei Xu
AI-induced bias is much discussed not only in research but also in public discourse, both with regard to fairness and ethics and with regard to the universal accessibility of AI. Automation bias occurs when users "over-trust" or inappropriately trust automated decision making; the concern extends beyond explainability to the plausibility, reliability, predictability, and intervention possibility of the automated system (Strauß, 2021). Algorithmic bias, on the other hand, refers to a systematic deviation in an algorithm's output, performance, or impact relative to some norm or standard, and can manifest as moral, statistical, social, or other forms of bias, depending on the normative standard used as a reference point (Fazelpour & Danks, 2021).
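The point that bias is measured relative to a chosen normative standard can be made concrete with a small sketch (my own, using made-up numbers): the same predictions satisfy one reference norm (equal false-negative rates across groups) while violating another (equal selection rates), so whether the output counts as "biased" depends on which standard serves as the reference point.

```python
# Illustrative sketch with made-up numbers: the same predictions look fair
# under one normative standard (equal false-negative rates across groups)
# and biased under another (equal selection rates).
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(g):
    # Share of group g predicted positive (demographic-parity view).
    return y_pred[group == g].mean()

def false_negative_rate(g):
    # Share of truly positive members of group g predicted negative.
    mask = (group == g) & (y_true == 1)
    return (y_pred[mask] == 0).mean()

for g in ("A", "B"):
    print(f"group {g}: selection rate={selection_rate(g):.2f}, "
          f"FNR={false_negative_rate(g):.2f}")
# Output: equal FNR (0.33 vs 0.33) but unequal selection rates (0.40 vs 0.60).
```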