AI & Ethics Series, Part Two: 5 Reasons Why We Do Not Need More AI Ethical Guidelines

dwijendra dwivedi
Dec 8, 2021

Organizations are quickly adopting artificial intelligence (AI) in an “innovate-to-survive” or “innovate-to-grow” race across industries. AI is gaining rapid acceptance even in heavily regulated sectors such as banking, and it is reshaping healthcare, where connected care and AI are transforming the lives of communities. AI has outperformed humans in complex visual tasks and games. These successes became possible mainly through improvements in deep learning methods, the availability of large datasets, and enhanced processing capability from GPUs and TPUs. At the same time, AI inherently carries high risk. That risk spans every stage of the AI solution development life cycle and every component of the AI system, and risks such as discrimination, bias, or illogical outcomes may continue to lurk in industries that use AI.

In recent years, many private companies, research institutes and public sector agencies have published ethics guidelines to be followed while researching, developing and commercializing AI-based solutions. These guidelines gather the principles to which technology developers should adhere as far as possible. However, a critical question arises: have these ethical guidelines had any real impact on human decision-making in AI and machine learning? The short answer is: no. We also observe growing effort around AI’s ethical, societal and legal impact: national and transnational governing bodies have established guidelines and ethics initiatives to provide recommendations, standards and policy suggestions to support AI system development, deployment and usage. In November 2021, UNESCO’s 193 member states adopted the first comprehensive agreement on the ethics of artificial intelligence.

Although there is a wealth of ethical guidelines for AI, these guidelines remain disconnected from one another, making it difficult for those involved in developing or using AI to determine which ethical questions they should be aware of. There is a large degree of convergence in the principles on which guidance documents are based (Jobin et al., 2019), yet the quickly growing set of tools being developed to address AI ethics is often difficult to map to the categories or principles they could help address (Morley et al., 2020).

1) High level and lacking in detail: Ethical guidelines need to be translated into actions and recommendations within organizations. A further issue is that AI ethics guidelines are aimed at a wide range of stakeholders: not only policymakers, users and developers, but also educators, civil society organizations, industry associations, professional bodies and more. As a result, current guidelines are often difficult to understand and are not drafted for technical users, who are a key group. All stakeholders need to be well informed of AI ethics, roles and responsibilities, and the guidelines should be clearly documented for the various stakeholders concerned.

2) Serve as a marketing strategy at present: Ethical guidelines are used to suggest to legislators that self-regulation in science and industry is adequate. It seems that businesses are merely buying time against the ever-imminent threat of increased government regulation rather than recognizing the importance of AI ethics and the need for regulation.

3) Little impact on the software development life cycle: AI ethics is rarely formally implemented in software projects; it is treated as a surplus add-on, a non-binding framework imposed from outside. Vakkuri et al. (2019) note that software designers integrate ethical values into the technologies they create only implicitly. The field has not felt the need to address ethical concerns, and ethics has not been part of developers’ education. Given the growing need, ethics should be given more prominence and formally integrated into the SDLC.

4) Lack of reinforcement mechanisms: AI ethical guidelines lack mechanisms to reinforce the best practices they recommend. Researchers, politicians, consultants, managers and advocates must deal with this essential weakness: there should be a foundation of mechanisms to put these principles into practice and strengthen them.

5) None of the guidelines address all the key issues completely: Hagendorff (2020) compared 22 major guidelines and concluded that none addresses all the key AI ethical issues completely. Most ethical recommendations take a broad, top-down approach rather than a more specific, bottom-up one. The guidelines he compared are the following: (Pekka et al. 2018), (Holdren et al. 2016), (Beijing Academy of Artificial Intelligence 2019), (Organisation for Economic Co-operation and Development 2019), (Brundage et al. 2018), (Floridi et al. 2018), (Future of Life Institute 2017), (Crawford et al. 2016), (Campolo et al. 2017), (Whittaker et al. 2018), (Crawford et al. 2019), (Diakopoulos et al.), (Abrassart et al. 2018), (OpenAI 2018), (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2016), (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019), (Information Technology Industry Council 2017), (Microsoft Corporation 2019), (DeepMind), (Google 2018), (Cutler et al. 2018) and (Partnership on AI 2018).

Most common AI ethical issues: Across these 22 published guidelines, the ethical issues that appear most frequently are:

Privacy protection

Fairness, non-discrimination and justice

Accountability

Transparency and openness

Safety and cybersecurity

Summary:

AI is being implemented across all industries and domains, and with its enormous success and popularity the risk is correspondingly high. While there is an abundance of ethical guidelines on AI, they are voluntary and do not provide details for implementation. It is therefore challenging for those involved in the development and use of AI to understand their legal and ethical position. At the same time, there is a need to identify the parties responsible for enforcing, supervising and managing these ethics in AI systems, which means all stakeholders need to be aware of AI’s ethical roles and responsibilities. Mechanisms for reinforcing best practices are missing, and the guidelines so far lack a strong focus on the technological details of the various methods and technologies in AI and machine learning; a stronger technical focus might help close the gap between ethical and technical discourses. Are we looking for an independent institution to regulate AI? We’ll talk about that in the next article.

References:

1. Cireşan, D., Meier, U., Masci, J., Schmidhuber, J.: A committee of neural networks for traffic sign classification. In: International Joint Conference on Neural Networks (IJCNN), pp. 1918–1921 (2011)

2. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds & Machines 30, 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8

3. Vakkuri, V., et al.: Ethically aligned design of autonomous systems: industry viewpoint and an empirical study. Preprint arXiv:1906.07946 (2019)

4. Ryan, M. and Stahl, B.C. (2021), “Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications”, Journal of Information, Communication and Ethics in Society, Vol. 19 No. 1, pp. 61–86. https://doi.org/10.1108/JICES-12-2019-0138

5. Jobin, A., Ienca, M. & Vayena, E. The global landscape of AI ethics guidelines. Nat Mach Intell 1, 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

6. Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nat Mach Intell 1, 501–507 (2019). https://doi.org/10.1038/s42256-019-0114-4

7. Morley, J., Floridi, L., Kinsey, L. et al. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Sci Eng Ethics 26, 2141–2168 (2020). https://doi.org/10.1007/s11948-019-00165-5

The opinions expressed in this article are the author’s own and do not reflect the views of any organization the author has worked with or is working with.


dwijendra dwivedi

Head of AI & IoT EMEA & AP team at SAS | Author | Speaker