IBM insists organisations work to build public trust

A recent IBM study revealed data privacy concerns and trust/transparency concerns are the biggest inhibitors of generative AI, according to IT professionals at large organisations.

Companies will not be able to adopt artificial intelligence (AI) at scale unless they approach the technology responsibly and put governance frameworks in place, says technology firm IBM.

"The speed, scope and scale of generative AI is unprecedented," said Catherine Lian, general manager and technology leader for IBM Asean.

An IDC survey suggests at least a quarter of the 2,000 global companies polled credit their AI capabilities with contributing more than 5% of their earnings.

AI investments are likely to continue growing as around 50% of the surveyed firms plan to increase AI spending, viewing it as a strategic imperative.

Gartner predicts the global market for AI software will reach almost US$135 billion by 2025.

According to PwC, AI is projected to unlock an estimated $15.7 trillion in value by 2030.

AI's impact on society continues to raise concerns, meaning organisations wishing to employ AI to unlock new value have a fundamental responsibility to foster public trust in the technology, according to IBM.

"Without responsible AI and an AI governance framework, companies won't be able to adopt AI at scale," said Christina Montgomery, vice-president and chief privacy and trust officer at IBM.

She cited IDC's CEO Sentiments Survey, which lists the top three concerns among senior executives as digital skills gaps, a lack of business trust in IT, and a lack of digital business know-how.

A recent IBM Institute for Business Value study revealed data privacy concerns (57%) and trust/transparency concerns (43%) are the biggest inhibitors of generative AI, according to IT professionals at large organisations that are not exploring or implementing generative AI.

Minimising bias (87%), maintaining brand integrity and customer trust (85%) and meeting regulatory obligations (81%) are considered the most important aspects in terms of building trust in AI.

In Southeast Asia, the generative AI market is expected to see significant growth in the coming years, with one forecast anticipating a compound annual growth rate of 24.4% from 2023 to 2030.

According to Oxford Insights, six Asean countries -- Singapore, Malaysia, Thailand, Indonesia, Vietnam and the Philippines -- are above the global average when it comes to government AI readiness scores.

According to an IBM report on the cost of a data breach in 2023, the potential costs of non-compliance are staggering and extend far beyond simple fines.

Organisations in Asean have lost an average of $3.05 million in revenue from a single non-compliance event.

This is only the tip of the iceberg -- the financial impact goes far beyond the bottom line, said Ms Montgomery.

Any proposed legislation should consider the different roles of AI creators and deployers and hold them accountable in the context in which they develop or deploy AI, she said.

IBM believes the purpose of AI is to augment human intelligence, that data and insights belong to their creator, and that the technology must be transparent and explainable, said Ms Montgomery.

Using AI that is trustworthy by design can help organisations scale the technology while mitigating risks, she said.

The IBM Institute for Business Value's AI Ethics in Action survey found 75% of executives view ethics as a source of competitive differentiation.

Emerging AI risks

Generative AI carries various potential risks, including output bias, copyright infringement, AI hallucination, and impacts on education, Ms Montgomery said.

Some existing risks are intensified by generative AI, including data bias, threats to data privacy rights, improper usage, impacts on jobs, and human exploitation, she said.

Deepfakes represent one of the most pressing challenges posed by generative AI, said Ms Montgomery.

Policymakers should work to mitigate the harm caused by deepfakes, she said. To protect the integrity of elections, they should prohibit the distribution of materially deceptive deepfake content related to the polls.

Candidates targeted by materially deceptive AI-generated content should be able to seek damages or have deceptive content removed, said Ms Montgomery.

Policymakers should also protect people's privacy by creating strong criminal and civil liability for those who distribute non-consensual intimate audiovisual content, including AI-generated content, as well as for those who threaten to do so.
