GenAI shaking up the entire IT landscape

Pros and cons of AI adoption now at the heart of corporate strategic decisions

Generative artificial intelligence (GenAI) is already beginning to affect areas where most people would assume only humans can have lasting impact. It is a trend that underpins many of the top IT predictions for 2025 and beyond, according to the market research firm Gartner Inc.

"It is clear that no matter where we go, we cannot avoid the impact of AI," said Daryl Plummer, Distinguished VP Analyst, Chief of Research and Gartner Fellow. "AI is evolving as human use of AI evolves. Before we reach the point where humans can no longer keep up, we must embrace how much better AI can make us."

Among the top trends forecast by Gartner:

Through 2026, 20% of organisations will use AI to flatten their organisational structure, eliminating more than half of current middle management positions.

Organisations that deploy AI to replace middle managers will be able to capitalise on reduced labour costs in the short term and on savings in employee benefits in the long term. AI deployment will also enhance productivity and increase span of control by automating task scheduling, reporting and performance monitoring for the remaining workforce. This allows remaining managers to focus on more strategic, scalable and value-added activities.

AI adoption will present challenges, such as the wider workforce feeling concerned over job security, managers feeling overwhelmed with additional direct reports, and remaining employees being reluctant to change.

By 2028, technological immersion will impact populations with digital addiction and social isolation, prompting 70% of organisations to adopt anti-digital policies.

Gartner predicts that by 2028, about 1 billion people will be affected by digital addiction, which will lead to decreased productivity, increased stress and a spike in mental health disorders such as anxiety and depression. Additionally, digital immersion will negatively impact social skills, especially among younger generations that are more susceptible to these trends.

"The isolating effects of digital immersion will lead to a disjointed workforce, causing enterprises to see a significant drop in productivity," said Mr Plummer.

"Organisations must make digital detox periods mandatory for their employees, banning after-hours communication and bringing back compulsory analogue tools and techniques such as screen-free meetings, email-free Fridays and off-desk lunch breaks."

By 2029, 10% of global boards will use AI guidance to challenge executive decisions that are material to their business.

AI-generated insights will have far-reaching impacts on executive decision-making and will empower board members to challenge executive decisions. This will end the era of maverick CEOs whose decisions cannot be fully defended.

"Impactful AI insights will at first seem like a minority report that doesn't reflect the majority view of board members," said Mr Plummer. "However, as AI insights prove effective, they will gain acceptance among executives competing for decision support data to improve business results."

By 2028, 40% of large enterprises will deploy AI to manipulate and measure employee mood and behaviour, all in the name of profit.

AI has the capability to perform sentiment analysis on workplace interactions and communications. This provides feedback to ensure that the overall sentiment aligns with desired behaviours, which will allow for a motivated and engaged workforce.

"Employees may feel their autonomy and privacy are compromised, leading to dissatisfaction and eroded trust," said Mr Plummer. "While the potential benefits of AI-driven behavioural technologies are substantial, companies must balance efficiency gains with genuine care for employee well-being to avoid long-term damage to morale and loyalty."

By 2027, 70% of new contracts for employees will include licensing and fair usage clauses for AI representations of their personas.

Large language models (LLMs) have no set end date, which means employees' personal data captured by enterprise LLMs will remain part of those models not only during their employment, but also after it ends.

This will lead to a public debate over whether the employee or the employer owns such digital personas, which may ultimately lead to lawsuits. Fair usage clauses will be used to protect enterprises from immediate lawsuits but will prove controversial.

By 2027, 70% of healthcare providers will include emotional-AI-related terms and conditions in technology contracts or risk billions in financial harm.

Rising patient demand and clinician burnout have driven many healthcare workers to leave the profession, increasing the workload of those who remain and creating an "empathy crisis". Using emotional AI for tasks such as collecting patient data can free up healthcare workers' time and alleviate some of the burnout and frustration they experience.

By 2028, 25% of enterprise breaches will be traced back to AI agent abuse, from both external and malicious internal actors.

New security solutions will be necessary as AI agents significantly expand the already invisible attack surface at enterprises. This will force enterprises to protect their businesses both from savvy external actors and from disgruntled employees who create AI agents to carry out nefarious activities.

"It's much easier to build risk and security mitigation into products and software than it is to add them after a breach," said Mr Plummer.

By 2028, 40% of chief information officers will demand "Guardian Agents" be available to autonomously track, oversee or contain the results of AI agent actions.

Interest in AI agents is growing, and as new levels of intelligence are added, GenAI agents are poised to expand rapidly into strategic planning. "Guardian Agents" build on mechanisms such as security monitoring, observability, compliance assurance, ethics, data filtering and log reviews to oversee other AI agents. Through 2025, the number of product releases featuring multiple agents will rise steadily, with more complex use cases.

"In the near term, security-related attacks on AI agents will be a new threat surface," said Mr Plummer. "The introduction of guardrails, security filters, human oversight, or even security observability is not sufficient to ensure consistently appropriate agent use."
