Navigating the uncharted territory of AI

Artificial Intelligence (AI), often equated in the popular imagination with robots, has captured public attention. Yet the term "AI" covers far more than robots. AI embodies technology that learns from and digests data, exhibiting intelligence paralleling that of humans, and it has now become a major creative force. Generative AI can offer projections, predictions and propositions, potentially surpassing human intelligence. This leads to the question: should AI be regulated, and if so, how?

Any answer should be humble yet progressive. While humanity can appreciate AI's contributions, for example to medical services, economic growth and labour efficiency, caution is warranted over various AI-related risks, such as discrimination, negative profiling, and deceptive illusions such as deepfakes. There may also be threats to human safety, especially if AI slips beyond human control.

Currently, some regulation is emerging in parts of the globe, but there is still debate over the preferred option: whether to opt for "hard law", such as legally enforceable regulations accompanied by sanctions for breaches; "soft law", such as a persuasive code of ethics or a guiding policy framework; or a mixture of both. There are also different entry points to consider: multilateral, regional and national.

At the multilateral level, there is currently no legally enforceable treaty on the issue. The nearest thing to an international agreement is the 2021 Unesco Recommendation on the Ethics of AI. It veers towards the "soft law" approach, guiding the world community with various general ethical principles. Those principles include safety and security, respect for human rights and privacy, explainability and transparency in AI's relationship with the consumer, and the assurance of human control over AI. A smaller multilateral organisation, the Organisation for Economic Co-operation and Development (OECD), has proposed similar ethical guidelines since 2019.

At the regional level, developments are much more concrete. In Europe, there are two regional initiatives. The most prominent is the emerging AI Act of the European Union (EU). With this key draft law now vetted by the European Parliament, it could become regional law, legally enforceable later this year. It veers towards the "hard law" approach, with specifics ranging from a list of prohibited AI practices to guardrails or conditions imposed on high-risk AI if it is to be allowed at all. At the top end is the list of prohibited AI, such as systems that manipulate emotions and incite children to violence, and social scoring, whereby data are used to classify people as trustworthy or not, leading to discrimination.

Then there are the high-risk AI practices, which will require due diligence measures and risk mitigation. These include remote biometric identification systems, such as facial recognition technology in public spaces, and real-time biometric surveillance, such as the near-immediate recording of live action, especially by law enforcers, all of which may lead to discrimination against the persons under scrutiny.

In the EU, monitoring agencies are to be set up at the national level, complemented by a regional oversight mechanism. There are to be precautionary measures, such as the requirement to notify the authorities of the use of various types of AI, and corrective measures, including sanctions with hefty fines. As for other AI not falling under the guardrails above, the preferred option is industry self-regulation, with codes of ethics and vetting by the industry itself.

The other regional initiative is the Council of Europe, the intergovernmental organisation covering human rights in Europe. It is now drafting a treaty with key principles to be followed, such as safety, respect for privacy, and non-discrimination, with a "hard-soft" approach.

Interestingly, Asean will have an AI plan or roadmap by next year. It is likely to be driven by the desire of some Asean countries to lead the AI race and be a hub for economic and social development.

What about the national level? Many countries are in an experimental phase. Several are moving towards national policies with ethical guidelines as a first step. Thailand's national AI Plan began last year; it is a five-year plan complemented by some ethical guidelines. By contrast, a major Asian country has taken a more targeted approach, making algorithms (namely, data-related instructions leading to specific outcomes) transparent by setting up an algorithm registry. It has also adopted a regulation to counter the "deep fakes" that AI can produce, with a more comprehensive AI law to come in future.

Yet, a major concern in Asia is that technology is already being deployed extensively for surveillance against those seen as opponents of those in power.

By contrast, the US federal system is open to various options. Over 20 states have AI legislation awaiting adoption, much of it concerning the setting up of agencies to supervise AI. Another angle is exemplified by a proposed law in California on AI transparency, which stipulates the right of consumers to know the extent to which AI is being used on them. Another US state is proposing legislation against AI whose actions can result in discrimination. The executive branch has also been in dialogue with companies investing in AI, seeking commitments to various ethical rules, such as safety and security, which indicates a "soft law" approach.

Future attention must be paid to how AI and robots learn before they create. This depends upon the data sets used for AI training. There is a need to ensure those data sets are balanced, especially concerning gender, ethnicity and other characteristics. This is linked with algorithms, now the object of competition for ever more powerful chips, namely graphics processing units. There is also the issue of data copyright and the grey area between human data and synthetic data. Ultimately, there is the question of control.

While all approaches seem to agree that humans must exert final control over AI, the question remains -- which humans? Imperatively, human control should be exercised by the democratic hand rather than the authoritarian brand.

Vitit Muntarbhorn

Chulalongkorn University Professor

Vitit Muntarbhorn is a Professor Emeritus at the Faculty of Law, Chulalongkorn University, Bangkok, Thailand. He has helped the UN in a number of pro bono positions, including as the first UN Special Rapporteur on the Sale of Children, Child Prostitution and Child Pornography; the first UN Special Rapporteur on the Situation of Human Rights in the Democratic People’s Republic of Korea; and the first UN Independent Expert on Protection against Violence and Discrimination based on Sexual Orientation and Gender Identity. He chaired the UN Commission of Inquiry (COI) on Cote d’Ivoire (Ivory Coast) and was a member of the UN COI on Syria. He is currently UN Special Rapporteur on the Situation of Human Rights in Cambodia, under the UN Human Rights Council in Geneva (2021- ). He is the recipient of the 2004 UNESCO Human Rights Education Prize and was bestowed a Knighthood (KBE) in 2018. His latest book is “Challenges of International Law in the Asian Region”.
