Artificial intelligence may drive us up the wall

Increasing automation could aggravate problems that already exist, say data scientists.

Data scientist Anthony Scriffignano says fintech crime does not need to be particularly sophisticated.

If an autonomous car drives into a wall and injures its passengers, who is responsible? As devices start claiming increased autonomy, we face moral, legal and physical risks for which we don't even have names yet.

Perhaps we should start being more realistic about the prospects of a world dominated by artificial intelligence (AI) and machine learning.

"Look, just because something is new and shiny, it doesn't mean it will solve existing problems or create new ones," says Anthony Scriffignano, chief data scientist at commercial data and analytics firm Dun & Bradstreet.

Even if these technologies are adopted, there is no guarantee we will be better off than we are today. "When you have new ways of doing things, you have new ways of cheating, or at least people trying to find new ways to cheat," Mr Scriffignano says.

"Imagine you receive a payment with a stolen credit card. There is more of trail where that transaction happened if it occurred over the internet, than if someone just walked up to your business and used your square terminal."

Fintech crime doesn't always have to be sophisticated, he says: "Someone sees on Facebook that you are about to go on vacation, orders something to your house that they never intend to pay for, and picks it up from your porch. The company that has not been paid will come after you."

As technologies evolve, so does the complexity of the crimes they enable. The situations described above make crimes harder to track down and prosecute, but the legal framework to deal with them is already in place: they are old crimes in a new guise.

"We have laws that are very clear on human agency," Mr Scriffignano says. "If I hire you to deliver dynamite and you blow up a building, then I am responsible, because I paid you to deliver the dynamite."

Today, most machines make decisions in ways that can be traced directly to their designers and programmers, so liability can comfortably be laid at their doorstep.

"The first generation of fully autonomous machines, perhaps driverless cars, will not be tools used by humans, but machines deployed by humans that will act on information the machine acquires and analyses, and will often make decision in environments not anticipated by its creators," says David C Vladeck, a professor at the Georgetown Law Center.

Machines don't get drunk, and they can spot moving objects at speeds humans can't match, but "factors beyond the machine's control virtually guarantee that at some point the car will have an accident that will cause injury of some kind, and will act in ways that are not necessarily ordained by its programming", Prof Vladeck says.

The legal framework is not always so clear, however. Who bears responsibility for the lives of passengers in self-driving vehicles? The designer, the data supplier, or both? And what about the car itself, an entity capable of changing its goals and seeing them through?

Mr Scriffignano offers an even simpler example: "If I send a bot to do something, and the bot learns language that is considered racist and uses that language against somebody, am I responsible? It's not black and white."

A case like this may hit the courts sooner than we realise. "We have refrigerators that order things from Amazon; soon we will have devices chatting with each other and saying 'I would like to do business with you'," Mr Scriffignano says.

In January, for example, a TV news report about a six-year-old girl who had ordered a doll's house through Amazon's Alexa set off a handful of viewers' voice-assistant devices, triggering them to order the toy as well.
