Managing the double-edged sword of AI

Reputational risk from unintended consequences of AI is one area businesses need to deal with. By Conall McDevitt

Artificial intelligence (AI) is not new. Neither is reputational risk. While corporations have been using AI for some time now, most of it has been unseen in data analytics, predicting customer behaviour, sales and marketing, or operations.

Most of the time, clients and customers do not see the touch of AI in a corporation's work.

For instance, a manufacturing company might use machine learning to collect and analyse an immense amount of data, then identify patterns and anomalies which the company can use to make decisions about improving operations. As a customer of this manufacturing company, you would probably never see the use of this AI.

That is set to change with ChatGPT, which uses generative AI as a language model to answer questions and assist with tasks. Its uses are already varied: students might use it to write an essay, a software engineer to code, a traveller to plan an itinerary, and some are using it as a search engine. Companies are planning to jump on this bandwagon.

Forbes magazine reported that companies using ChatGPT to answer customer questions include Meta, Canva and Shopify.

Ada, a Toronto-based company that automates 4.5 billion customer service interactions, partnered with ChatGPT to further enhance the technology.

As part of this evolution, CNBC reported that Microsoft plans to release technology allowing large companies to launch their own chatbots built on ChatGPT. That could mean billions of people interacting with the technology.

It seems like a perfect partnership, a natural next step for the technology. But it cuts both ways.

Not everyone has jumped onto this tempting bandwagon. Some of the most AI-proficient organisations in the world are treading with caution, and for good reason.

As impressive as ChatGPT has proved so far, large language models (LLMs) such as ChatGPT are still rife with well-known problems. They amplify social biases, often to the detriment of women and people of colour. They are riddled with loopholes: users found they could circumvent ChatGPT's safety guidelines, which are supposed to stop it from providing dangerous information, simply by asking it to imagine it is a bad AI.

In other words, ChatGPT-like AI is fraught with reputational risk for businesses.

That doesn't mean we can totally dismiss AI such as ChatGPT. Adopting new technology of any sort is bound to come with risks.

How do we reap the benefits of AI while keeping reputational risk at an acceptable level?

The Reputation, Crisis and Resilience (RCR) team at Deloitte recently held a roundtable with leaders in the financial services, technology and healthcare industries to discuss how they approach the complex challenge of managing reputational risk. Among the conclusions:

  • Foster a reputation-intelligent culture: Create a culture that is sensitive to brand and reputation. In every decision made, employees should have an internal compass that constantly asks: will this move the needle on the company's reputation, and how? This can be cultivated through holistic onboarding and training programmes.
  • Set a reputation risk tolerance: Setting a tolerance can help organisations make intentional decisions. No company wants to take a reputational hit, but few companies actually set tolerance levels for how much risk they want to take. When you have a threshold to stay within, it's easier to deal with new technologies you might not understand fully.
  • Utilise reputation risk management: Measurement methods include regular surveys, media monitoring and key opinion research. However, leaders must find a balance on collecting relevant data without drowning in it. Research shows that too much data collection can be counterproductive, distracting people from the bigger picture or creating a risk-averse attitude.

As AI continues to develop rapidly, keeping up with its intricate depths and breadths will be difficult.

While we should keep abreast, what's more important is focusing on cultivating a strong mindset around reputational risk so that no matter the tool -- AI, social media, cryptocurrency -- we can always manage the risk involved.

For instance, instead of concentrating all your effort on the dangers of a kitchen knife and how it might hurt you, learn the general guidelines for kitchen safety, which cover the sharp edge of a knife as well as a pan fire.

Similarly, instead of fixating on the latest technological marvel and learning about every single reputational risk that might come with it, build a robust reputational mindset -- one that will help your organisation weather any risky venture, and into whose framework any new technology can easily fit.


Conall McDevitt is managing partner of Europe and Asia for the Penta Group, a consultancy that focuses on fostering better understanding between businesses and their stakeholders
