Security at core of Microsoft's AI shift

Firm touts Secure Future Initiative

Ms Jakkal said the core of AI transformation is empowering employees with AI tools that improve productivity.

Microsoft is underscoring its "Secure Future Initiative" to prepare for the growing scale and stakes of cyber-attacks amid the artificial intelligence (AI) transformation.

"In the age of AI, what we need to do is become an AI company. This shift involves not just adopting AI technologies but also integrating them deeply into business operations to drive efficiency and growth," said Cally Chan, general manager of Microsoft Hong Kong and Macau, during a Microsoft AI Tour event in Hong Kong on Jan 16.

The core of this transformation is empowering employees with AI tools that improve productivity, a concept central to the rise of agentic AI systems that can autonomously perform tasks and support decision-making, Ms Chan said.

"AI adoption must start with security at its core," said Vasu Jakkal, corporate vice-president of Microsoft Security.

If cybercrime were a nation state, it would rank as the world's third-largest economy, with costs of US$9 trillion in 2024, a figure expected to reach $13 trillion by 2028, according to Statista.

Cyberthreats are more complex than ever. Some 7,000 password attacks occur every second, human-operated ransomware attacks rose 275% from 2022 to 2024, and the number of data breach victims has continued to grow, reaching 1 billion, according to Ms Jakkal.

Secure Future Initiative

Microsoft has made security its top priority, with chief executive Satya Nadella announcing the Secure Future Initiative in November 2023. The initiative places security first in the company's AI development.

The initiative emphasises building AI products that are secure by default and by design, alongside secure operations.

Microsoft is focused on ensuring that data used in AI models is accurately labelled and securely handled, preventing inadvertent data loss or misuse, Ms Jakkal said.

Data governance is also important, particularly in ensuring that data used by AI tools is properly classified.

Microsoft is investing $20 billion over the next five years to bolster cybersecurity.

The company tracks over 1,500 unique threat actors and financial crime actors, using advanced signal intelligence to prevent billions of attacks daily.

"Everyday, we see 78 million [threat] signals, which we use to protect businesses from tens of billions of attacks. We prevented 70 billion attacks in a year," said Ms Jakkal.

This proactive approach is not just about defending against attacks but also gaining better insights into attacker behaviour to enhance detection and response, she added.

Currently, there are 4.6 million unfilled cybersecurity positions worldwide, with about 2.6 million in Asia. To reduce this shortfall, Microsoft has consolidated its end-to-end security offerings to help organisations detect and respond to threats more efficiently.

Ms Jakkal added that Microsoft has applied generative AI (GenAI) to security by introducing Security Copilot in April 2024, a GenAI tool designed to assist security teams in detecting and mitigating threats.

"We believe Security Copilot will fundamentally change the way we approach security."

Studies show that security professionals using the tool work 26% faster and with 35% greater accuracy than those using other tools.

"GenAI will be the most successful -- and most controversial -- technology of our lifetime, especially in the realm of security."

AI-driven emerging threats

Ms Jakkal said adversaries will use GenAI in many ways, ranging from malware generation and automated vulnerability discovery to creating deepfakes of data, email or voice. AI is helping attackers automate and personalise these threats, making them more sophisticated and dangerous.

Despite these emerging threats, AI-powered tools allow defenders to better identify and respond to attacks, staying ahead of adversaries who are also adopting AI.

To further bolster security, Microsoft has developed tools such as Purview for data security and Defender for threat protection, ensuring that AI systems themselves are secure.

Ms Jakkal said the future of cybersecurity appears to be increasingly agent-driven. Looking ahead, the potential for "agentic" innovations is significant, particularly for security.

In this future landscape, agents could take on specialised security tasks. For example, one agent could be focused on data security, another on security operations, and yet another on managing identity security.

These agent-driven systems would automate and streamline various cybersecurity functions, providing targeted, efficient protection, Ms Jakkal said.
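For readers curious how such specialisation might look in practice, the sketch below is a purely hypothetical illustration of the idea described above, not Microsoft code or any real product API: a simple dispatcher routes each security event to a narrowly scoped agent for data security, security operations or identity.

```python
# Hypothetical sketch only: routing security events to specialised agents.
# Class and function names are invented for illustration.
from dataclasses import dataclass


@dataclass
class SecurityEvent:
    category: str  # e.g. "data", "operations", "identity"
    detail: str


class SecurityAgent:
    """Base class for a narrowly scoped security agent."""
    def handle(self, event: SecurityEvent) -> str:
        raise NotImplementedError


class DataSecurityAgent(SecurityAgent):
    def handle(self, event: SecurityEvent) -> str:
        return f"Data agent: classifying and protecting '{event.detail}'"


class SecurityOpsAgent(SecurityAgent):
    def handle(self, event: SecurityEvent) -> str:
        return f"Ops agent: triaging and responding to '{event.detail}'"


class IdentityAgent(SecurityAgent):
    def handle(self, event: SecurityEvent) -> str:
        return f"Identity agent: reviewing sign-in risk for '{event.detail}'"


# Each event category maps to the agent that specialises in it.
AGENTS: dict[str, SecurityAgent] = {
    "data": DataSecurityAgent(),
    "operations": SecurityOpsAgent(),
    "identity": IdentityAgent(),
}


def dispatch(event: SecurityEvent) -> str:
    agent = AGENTS.get(event.category)
    return agent.handle(event) if agent else "No specialised agent available"


if __name__ == "__main__":
    print(dispatch(SecurityEvent("identity", "unusual sign-in from a new device")))
```

The design choice being illustrated is simply division of labour: each agent handles one class of task, and a lightweight router keeps them coordinated, mirroring the targeted, efficient protection Ms Jakkal describes.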
