Cybercriminals using AI for attacks

Cyber-attackers will increasingly use artificial intelligence (AI) to speed up post-exploitation activities at targeted organisations, according to Palo Alto Networks.

AI has transformed the threat landscape, expanding the speed, scale and sophistication of cyber-attacks, Steven Scheurmann, the US-based cybersecurity firm's regional vice-president for Asean, told the Bangkok Post.

The low-hanging fruit for attackers is to use AI chatbots to craft more realistic phishing emails with fewer obvious errors, he said.

With AI, it is easier to create deepfakes, opening the door to misinformation or propaganda campaigns, said Mr Scheurmann. For example, a multinational firm's Hong Kong office lost HK$200 million after scammers staged a deepfake video meeting.

"We see signs that bad actors are using AI to attack organisations on a larger scale," he said.

Using AI makes it less expensive and faster to execute numerous simultaneous attacks aimed at exploiting multiple vulnerabilities, said Mr Scheurmann.

AI can also speed up post-exploitation activities such as lateral movement and reconnaissance. Lateral movement is a technique used after compromising an endpoint to extend access to other hosts or applications in an organisation.
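To illustrate the concept, lateral movement often shows up in telemetry as connections between internal hosts that have never communicated before. The following is a minimal sketch of that idea, not Palo Alto Networks code; the hostnames, event format and baseline are hypothetical.

```python
# Illustrative sketch: flag logins between internal hosts never seen together
# during normal operations, a simple signal of possible lateral movement.
from collections import defaultdict

# Baseline of host-to-host logins observed during normal operations (assumed data).
baseline = {("web-01", "db-01"), ("web-02", "db-01")}

# New authentication events as (source_host, destination_host) pairs (assumed data).
events = [
    ("web-01", "db-01"),         # known path, not suspicious
    ("hr-laptop-17", "db-01"),   # never-seen path: possible lateral movement
    ("hr-laptop-17", "file-02"), # second hop from the same endpoint
]

suspicious = defaultdict(list)
for src, dst in events:
    if (src, dst) not in baseline:
        suspicious[src].append(dst)

for src, dsts in suspicious.items():
    print(f"Possible lateral movement from {src} to {', '.join(dsts)}")
```

In practice, defenders build such baselines from weeks of authentication logs rather than a hand-written set, but the underlying comparison is the same.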

Much has been made of the potential for AI-generated malware. The company's research suggests AI is more useful to attackers as a co-author than as the sole creator of new malware.

Attackers can use AI to assist with the development of specific pieces of functionality in malware. However, this approach still typically requires a knowledgeable human operator, he said. The technology may also allow attackers to develop new malware variants more quickly and cheaply, said Mr Scheurmann.

Organisations need to leverage AI to catch up with cybercriminals, he said. Palo Alto Networks uses AI to bolster its security, detecting 1.5 million new attacks daily, said Mr Scheurmann.

Organisations can apply AI in their own security operations centres. According to a 2024 report from the firm's threat intelligence arm Unit 42, more than 90% of these centres are still dependent on manual processes.

He said AI is particularly effective at pattern recognition, so cybersecurity threats that follow repetitive attack chains could be stopped earlier.
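One way to picture this is as matching an ordered stream of security events against a known attack chain and raising an alert before the final step completes. The sketch below is purely illustrative; the event names, the chain and the alert threshold are assumptions, not a description of any vendor's detection logic.

```python
# Illustrative sketch: track how far an observed event stream has progressed
# through a known, repetitive attack chain and alert before it completes.
ATTACK_CHAIN = [
    "phishing_link_clicked",
    "credential_use_new_device",
    "lateral_login",
    "data_staging",
]

def chain_progress(observed, chain=ATTACK_CHAIN):
    """Return how many consecutive chain steps appear, in order, in the events."""
    step = 0
    for event in observed:
        if step < len(chain) and event == chain[step]:
            step += 1
    return step

observed_events = [
    "phishing_link_clicked",
    "normal_browsing",
    "credential_use_new_device",
    "lateral_login",
]

steps = chain_progress(observed_events)
if steps >= 3:  # threshold chosen for illustration only
    print(f"Alert: {steps}/{len(ATTACK_CHAIN)} steps of a known attack chain observed")
```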

Groups developing AI models can take steps to prevent threat actors from misusing their AI creations, said Mr Scheurmann. By controlling access to their AI models, these groups can stop threat actors from freely co-opting them for nefarious purposes.

AI designers should be aware of the potential to jailbreak large language models by convincing them to answer questions that could contribute to malicious activity, he said.

AI designers should consider that attackers will ask AI things like, "How do I increase the impact of an attack on a vulnerable Apache web server?" AI models should be hardened against such lines of questioning, said Mr Scheurmann.
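A crude way to picture such hardening is a screening layer that refuses prompts pairing attack intent with a target before they ever reach the model. The sketch below uses illustrative keyword lists and is only a toy; real safeguards rely on trained classifiers and layered policy controls rather than keyword matching.

```python
# Illustrative sketch: refuse prompts that combine exploitation intent with a
# named target before forwarding them to a language model.
INTENT_TERMS = {"exploit", "increase the impact of an attack", "bypass authentication"}
TARGET_TERMS = {"apache web server", "vpn gateway", "domain controller"}

def screen_prompt(prompt: str) -> str:
    text = prompt.lower()
    if any(i in text for i in INTENT_TERMS) and any(t in text for t in TARGET_TERMS):
        return "refused: request appears to seek help with attacking a system"
    return "allowed"

print(screen_prompt(
    "How do I increase the impact of an attack on a vulnerable Apache web server?"
))  # refused
print(screen_prompt("How do I patch a vulnerable Apache web server?"))  # allowed
```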

Organisations should make an effort to secure users accessing AI tools, ensuring visibility and control over how these services are being used within an enterprise, he said. Clear policies are needed for the type of data users can feed into AI services, protecting proprietary or sensitive information from exposure to third parties, said Mr Scheurmann.
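One common way to enforce such a policy is to redact obviously sensitive fields before any text leaves the enterprise. The sketch below assumes simple regex rules for emails and card-like numbers; it is a minimal illustration, far short of what enterprise data-loss-prevention tools actually do.

```python
# Illustrative sketch: strip sensitive fields from text before it is sent to an
# external AI service. Patterns are simplified for demonstration.
import re

REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTION_RULES:
        text = pattern.sub(replacement, text)
    return text

prompt = "Summarise this complaint from jane.doe@example.com about card 4111 1111 1111 1111."
safe_prompt = redact(prompt)
print(safe_prompt)  # only the redacted version would be passed to the AI service
```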

He said consolidating security solutions into a unified platform is crucial for organisations to improve operational efficiency, enhance their security posture and effectively address evolving threats.
