Bluebik identifies AI-related trends
Mr Phiphat sees continuing investment of companies in AI, cybersecurity and software development this year.

Artificial intelligence (AI) agentic workflow, AI governance, and disinformation security are key trends for 2025, according to Bluebik, a SET-listed tech consulting firm.

However, AI and machine learning are also fuelling a rise in disinformation, turning it into a "digital arms race", according to the company.

"We see continuing investment of companies in AI, cybersecurity and software development this year to be an engine of growth," said Phiphat Prapapanpong, director of advanced insights at Bluebik Group.

He foresees a trend towards adopting agentic AI workflows that leverage generative AI (GenAI) capabilities and integrate with business processes.

AI agents can learn and perform complex tasks with high accuracy. Agentic workflow loops allow a system to monitor, self-learn, iterate on tasks, and produce efficient outcomes.

Unlike non-agentic AI, agentic AI can perform tasks and make decisions.
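The monitor, self-learn and iterate loop described above can be sketched in a few lines. This is an illustrative toy only, with hypothetical names; in a real agentic system the refinement step would call an LLM or external tools rather than the stub below.

```python
# Minimal sketch of an agentic workflow loop: evaluate the current output,
# and if it is not yet acceptable, attempt an improved version and try again.
# refine() is a hypothetical stand-in for an LLM or tool call.

def refine(draft: str) -> str:
    """Toy 'improvement' step: appends a character to stand in for real work."""
    return draft + "."

def agentic_loop(task: str, is_good_enough, max_iters: int = 5) -> str:
    """Iterate on a task until the output passes a check or a limit is hit."""
    output = task
    for _ in range(max_iters):
        if is_good_enough(output):   # monitor: evaluate the current output
            break
        output = refine(output)      # act: produce a candidate improvement
    return output

result = agentic_loop("draft", lambda s: s.endswith("..."))
```

The key contrast with non-agentic use of GenAI is the feedback loop: the system checks its own output against a goal and decides whether to act again, rather than returning a single response.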

"This year marks the starting point for agentic AI, with large businesses, particularly banks, insurance companies, and retailers, beginning to pilot its use internally. By 2026, it is expected to become more widespread and move into production or real deployment," said Mr Phiphat.

He said organisations piloting agentic AI aim to increase employee productivity without needing to recruit new staff.

"Most AI agents are designed to reduce repetitive tasks, giving employees more time for valuable and innovative work, although some positions may be replaced."

Mr Phiphat said agentic AI will not replace or bypass software development, but it will increase the demand for AI developers who design, build and implement AI systems and applications.

"We can expect that, in addition to internal use of agentic AI within organisations, software companies will develop 'out-of-the-box' AI agents as a new business opportunity," Mr Phiphat noted.

Moreover, he expects a rise in AI governance platforms that help organisations oversee and regulate AI systems to ensure responsible and ethical usage.

They would enable IT leaders to guarantee that AI is dependable, transparent, fair and accountable, while also adhering to safety and ethical guidelines.

This alignment ensures AI supports each organisation's values and meets broader societal expectations.

Mr Phiphat also sees the emergence of disinformation security, noting that AI and machine learning are making disinformation more advanced, turning it into a digital arms race.

Techniques like phishing, hacktivism, fake news and social engineering are being used to spread fear, chaos and fraud, according to the company.

As these technologies become more accessible, the threat to businesses will increase, creating long-term risks if not addressed, it notes.

Gartner predicted that by 2028, 50% of enterprises will adopt products, services or features specifically to address disinformation security use cases, up from fewer than 5% in 2024.

Mr Phiphat added that US President Donald Trump is taking steps to deregulate AI, a policy that would enable AI developers to create more diverse use cases at a time when investors are eager to invest in the technology.

However, this also presents challenges regarding the responsible use of AI, particularly in terms of disinformation security, where adversaries use tools to create voice cloning and deepfakes.

"AI regulations or guidelines in many countries will include penalties for misuse, and tech defenders will launch AI detection tools to prevent criminal activity and identify deepfakes or content that uses AI to manipulate real content," Mr Phiphat said.
