Know Where to Start – Select the Right Project
Published in James Luke, David Porter, Padmanabhan Santhanam, Beyond Algorithms, 2022
James Luke, David Porter, Padmanabhan Santhanam
In Chapter 9, the AI Project Assessment Checklist will take your AI idea and consider it against the five key pillars of successful AI deployments:

Business Problem: a key part of any complex project. Clearly defining the scope is important; however, in an AI project we also need to consider how the impact of the AI will be measured, the skills you will require to deliver, and how the new capability will be integrated into the business process.

Stakeholders: as mentioned in the introductory chapter, AI solutions will impact society to a much greater extent than previous technologies. In addition to managing internal stakeholders within an enterprise, the values and beliefs of a whole range of external stakeholders, from regulators to customers, need to be managed.

Trust: for AI to be successful it really does need to be trusted … unless you are a James Bond villain, of course. Trust is not something you can specify; it is up to your consumers to decide whether to trust the AI. However, you can aim to develop trustworthy AI by considering important factors including accuracy, ethics, bias mitigation, explainability, robustness and transparency.

Data: it is all about the data, so any project evaluation will need to consider privacy, availability, adequacy, operations and access to domain expertise.

AI Expectation: what is the real necessity driving the project, and has this type of thing been done before? What is the true scope (again) of the application, and is it really feasible (a more thorough version of the step 1 evaluation outlined above)? Finally, what other complexity factors exist, and what are your hopes regarding reusability?
Ethical rules
Published in Vahap Tecim, Sezer Bozkus Kahyaoglu, Artificial Intelligence Perspective for Smart Cities, 2023
For example, the European Commission emphasised in its AI Strategy, published in 2018, that it is necessary to be prepared socioeconomically and financially for the AI transformation that will take place in the European Union (EU). It also pointed out that the legal and ethical infrastructure should be ready for AI technology. Some of the various documents published with ethical principles to draw a roadmap can be listed as follows:

“Position on Robotics and Artificial Intelligence” – The Greens (Green Working Group Robots)

“Report with Recommendations to the Commission on Civil Law Rules on Robotics” – European Parliament

“Ethics Guidelines for Trustworthy AI” – High-Level Expert Group on Artificial Intelligence

“An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations” – AI4People

“European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment” – European Commission for the Efficiency of Justice (CEPEJ), Council of Europe

“Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems” – European Commission

“Guidelines on Artificial Intelligence and Data Protection” – Council of Europe

In parallel with these documents, AI systems should be lawful, ethical and robust to be considered trustworthy. For instance, the “Ethics Guidelines for Trustworthy AI” document provides recommendations on improving auditing for AI technologies. Similarly, in the report “Guidelines on Artificial Intelligence and Data Protection”, published in 2019, the Council of Europe (2019) states that protecting human rights, fundamental freedoms and personal data is essential while developing AI applications. In addition, it states that the social and ethical values underpinning the functioning of democracies should not be neglected, and that risks should be minimised by considering them within the scope of “responsible innovation”.
The Use of Responsible Artificial Intelligence Techniques in the Context of Loan Approval Processes
Published in International Journal of Human–Computer Interaction, 2023
Erasmo Purificato, Flavio Lorenzo, Francesca Fallucchi, Ernesto William De Luca
A significant contribution in this direction has been provided by the High-Level Expert Group on AI (AI-HLEG, 2019), appointed by the European Commission, which presented the document “Ethics Guidelines for Trustworthy Artificial Intelligence.” As the guidelines’ authors note, the concept of trustworthy AI comprises three main components: compliance with existing laws and regulations (lawful AI); alignment with society’s ethical principles, even in situations for which no regulation has yet been developed (ethical AI); and robustness from both a technical and a social perspective, to avoid incorrect behaviors that may cause unintentional harm (robust AI). The AI-HLEG identifies four ethical principles that must be satisfied for an AI system to be considered trustworthy: respect for human autonomy, prevention of harm to other human beings, fairness of the AI system’s decisions, and explicability of the outcome of an AI system.