Two schools of thought for Responsible AI: which one do you subscribe to?
Responsible Artificial Intelligence deals with human responsibility for developing intelligent systems along fundamental humanitarian principles and values. There are two schools of thought on how AI could be made more responsible. Let us understand both, and then I will share what I recommend.
Regulatory School of thought:
The Regulatory School of thought is one of the largest schools of political economics. Its origins date back to the early 1970s in France, when the economy was in disarray and there was a great deal of economic instability. Its founder, Destanne de Bernis, coined the term "regulation", and his goal was to use the concept as a systems theory to update Marx's economics. Members of this school are generally influenced by structural Marxism, the Annales School, and the Frankfurt School. Their goal is to explain the emergence of new forms of capitalism out of tensions within existing, market-governed arrangements. The Regulatory School aims to understand and analyze regulation as a process. According to Jessop, regulation is the regularization of economic activity, and it seeks to integrate political economy, civil society, and the state to create better social conditions. Regulators are often viewed as defenders of the free market, yet they are also the ones who benefit most from the regulatory system; the Regulation Approach, by contrast, focuses on the negative effects of regulation.
Responsible AI means different things to different people. Recent years have seen concerted action by national and transnational governance bodies, including the European Union, the OECD, the UK, France, Canada, and others. The first school advocates formal regulation, with ethics and responsibility as primary concerns, whereas the second supports a less formalized approach. For instance, Floridi argues that every actor involved in data science and data engineering should bear responsibility for the consequences of their actions. He proposes borrowing the idea of backpropagation from deep learning, combined with distributed responsibility, to create a model in which accountability for a system's choices can be traced back to the actors involved.
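To make the analogy concrete, here is a minimal, purely illustrative sketch of how responsibility for a harmful outcome could be distributed backwards through a pipeline of actors, in the spirit of backpropagation. This is my own construction, not Floridi's formal model; the actor names, contribution weights, and harm score are all hypothetical.

```python
# Illustrative sketch: distributing responsibility for an outcome
# backwards through a pipeline of actors, backpropagation-style.
# Actors, weights, and the harm score are hypothetical placeholders.

pipeline = [
    # (actor, assumed causal contribution to the outcome)
    ("data_collector", 0.40),
    ("model_developer", 0.35),
    ("deploying_company", 0.25),
]

def backpropagate_responsibility(harm_score, pipeline):
    """Split a harm score among actors in proportion to their
    (assumed) contribution, analogous to how backpropagation
    distributes an error signal across a network's weights."""
    total = sum(weight for _, weight in pipeline)
    return {actor: harm_score * weight / total for actor, weight in pipeline}

shares = backpropagate_responsibility(harm_score=1.0, pipeline=pipeline)
for actor, share in shares.items():
    print(f"{actor}: {share:.2f}")
```

The point of the sketch is only that responsibility need not stop at the last actor in the chain: like an error gradient, it can flow back to everyone who contributed.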
Regulation of AI technology is crucial to ensuring the ethical behavior of AI developers and users. While some applications of AI have long been regulated, others fall outside existing frameworks, which may need to be amended to ensure the protection of human rights. For example, autonomous vehicles should not be permitted on public roads until the allocation of liability and responsibility is agreed upon; and unless companies developing AI systems provide adequate information to consumers about how their systems make decisions, there is a risk of harming other road users.

Regulation of AI technology is important for several reasons. It is an emerging technology that is more expensive and carries more risks than traditional technologies, so it is essential to control the risks associated with it. The use of AI in many industries is growing: AI can be used to detect cancer, reduce airplane collisions, or assist non-autonomous cars, so there are many potential uses for the technology. But there are also concerns that the development of AI technology will undermine public trust. AI systems may be biased despite being trained to be objective, and they may be less accurate and less cost-effective than a human-based solution. This is why AI regulation must focus on the new risks that AI technologies pose, rather than the old ones, and must concentrate on the ethical challenges alongside the technical risks.
Trust and self-regulation by Industry:
The Partnership on AI is an industry-led attempt to develop shared standards and governance models for artificial intelligence. This is a relatively new field, and many questions remain unanswered, including whether the AI industry has a responsibility to self-regulate. Among other things, there is no formal oversight system for the AI industry, and there is a lack of empirical data to guide the process. The Partnership does not have the resources to enforce its own rules; instead, it seeks to promote public understanding and aspirational efforts. Another concern is that government regulators are not technically competent to oversee AI, which can lead to an ineffective regulatory structure. A further challenge is that the scale of AI makes it difficult to regulate across borders, and there are few resources for establishing such a mechanism. Government has little capacity to devote to overseeing self-regulation, and it is difficult to recruit and retain qualified staff.
The Partnership on AI to Benefit People and Society was launched on 28 September 2016 by representatives of the technology companies Amazon, Google, IBM, DeepMind, and Microsoft. The organization was criticized by some for embracing self-regulation and focusing on the development of standards rather than binding rules. It did not sidestep the problem entirely, however: it made clear that it is willing to collaborate with other companies to develop a standard for AI ethics.
What do I recommend:
Let us build trust in everything we do. We may inherit bias from the sources of our data, because the systems that generate the data can be as biased as we are; the opinions we create and contribute on Twitter or Facebook carry our own biases. AI systems therefore need to identify and mitigate this bias, because machine learning models can perpetuate existing biases. At the same time, we may need to establish corporate governance to comply with responsible AI regulations, along with end-to-end internal policies to mitigate bias. So, the short answer is BOTH!
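As a concrete illustration of what "identify and mitigate" can mean in practice, here is a minimal sketch that measures one common group-fairness signal, the demographic parity difference, on a model's predictions, and applies a simple governance check on top of it. The predictions, group labels, and threshold are hypothetical placeholders; a real pipeline would use richer metrics and proper mitigation steps.

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Difference between the highest and lowest positive-prediction
    rates across groups. 0.0 means every group receives favorable
    predictions at the same rate."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical predictions (1 = favorable outcome) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_difference(y_pred, groups)
print(f"Demographic parity difference: {gap:.2f}")

# A simple, illustrative governance check: flag the model for review
# if the gap exceeds an internally agreed policy threshold.
THRESHOLD = 0.2  # hypothetical policy value
if gap > THRESHOLD:
    print("Bias check failed: route model for human review.")
```

This is exactly where the two schools meet: the metric itself is something industry can self-impose, while the threshold and the review process are natural places for regulation and corporate governance to set the bar.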