National AI ethics going to cabinet

The cabinet will be asked to endorse the country's first artificial intelligence (AI) ethics guidelines by December to ensure the proper use of this advanced technology, says the National Digital Economy and Society Committee (NDESC).

Speaking at an AI ethics forum yesterday, NDESC secretary-general Vunnaporn Devahastin said AI is useful in many areas and it is vital to establish a framework for its proper use.

AI is a powerful tool that can be used both positively and negatively, she said.

AI ethics principles have been approved by the NDESC, which is chaired by the prime minister, said Ms Vunnaporn. The NDESC office plans to ask the cabinet to endorse these guidelines for public use of AI by next month. The NDESC operates under the Digital Economy and Society Ministry.

"Energy, agriculture and healthcare are three key sectors that can leverage AI and we need to educate all parties in using AI morally," she said.

Ethics guidelines for other digital technologies could be rolled out in the future.

The AI ethics guidelines cover six principles.

First, AI needs to be used to support competitiveness and sustainable development.

Second, it must take laws, ethics and international standards into account. Accordingly, AI needs to be researched, developed, designed and used in ways that comply with laws, norms and ethics, and that guard against human rights violations and privacy breaches.

AI should not be used to determine human destiny, and its design must put humans at the centre, said Ms Vunnaporn.

The third concerns transparency and accountability. All stakeholders, from researchers and designers to developers and users, must be accountable for their engagement with AI according to their roles.

The fourth involves security and privacy. AI, she said, must not be developed to commit fraud or pose threats to others. AI should have mechanisms that allow humans to intervene in its systems to prevent harm, said Ms Vunnaporn.

The government should also work with other countries to stem the development of AI technology for automated weapons, she said.

The fifth concerns diversity and broad coverage without monopoly or discrimination. The final principle involves reliability, as AI technology must be reliable and create public confidence in its use.

Ms Vunnaporn said AI should also have the ability to gather and process user feedback.

Speaking at the same forum, Ome Sivadith, national technology head of Microsoft Thailand, said the company has a special unit responsible for ensuring AI development is ethical.

Three aspects need to be carefully considered when AI is developed, he said. The first concerns decisions to grant loans, the second relates to health risks, and the last involves the risk of human rights violations, such as those posed by facial recognition.

Tee Vachiramon, chief executive of Sertis, a Bangkok-based data analytics services firm, said the company has an AI board that thoroughly vets AI projects that could carry ethical risks, such as those connected with the military.
