How cyber risk is changing in 2026 — and what partners need to know

AI, identity protection and human risk are changing the cyber battleground.

Heading into 2026, cyber threats are evolving as AI, identity abuse and digital deception drive faster, more targeted attacks, according to security research.

For partners, demand is likely to grow for managed services as organisations face the challenge of balancing innovation with vigilance.

The fundamentals are still the focus, said Fred Thiele, CISO at Interactive, but the attack surface is shifting in ways that demand fresh thinking.

“Cyber risk isn’t static - it evolves with behaviour and technology,” he explained.

1. Identity is emerging as the frontline of attacks

Identity is becoming a primary attack surface, with attackers using stolen credentials, session hijacking and token abuse to infiltrate systems, according to security firm Flashpoint.

This makes intrusions harder to detect and, compounding the risk, non-human identities such as AI agents are often over-privileged and poorly monitored.

As a result, identity threat detection and response (ITDR) will become increasingly important for spotting misuse of valid access, while adoption of phishing-resistant MFA such as passkeys will need to accelerate, according to cyber consultancy Optiv.

“Identity is the heart of everything. If you have strong controls around identity, you solve most issues,” Thiele said.
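
As a rough illustration of the kind of signal ITDR tooling looks for, the short Python sketch below flags a valid session token that suddenly appears from an unfamiliar network or browser. The event format, field names and grouping logic are illustrative assumptions, not any particular vendor's detection method.

```python
from collections import defaultdict

# Illustrative only: a simplified view of the signal ITDR tooling looks for,
# flagging a valid session token suddenly used from an unfamiliar network or device.
KNOWN_CONTEXTS = defaultdict(set)  # token_id -> set of (ip_prefix, user_agent) seen before

def check_token_use(token_id: str, ip: str, user_agent: str) -> str:
    """Return 'allow' for familiar contexts, 'review' when a known-good token
    appears somewhere new (possible session hijacking or token theft)."""
    context = (ip.rsplit(".", 1)[0], user_agent)  # coarse network grouping
    if context in KNOWN_CONTEXTS[token_id]:
        return "allow"
    first_sighting = not KNOWN_CONTEXTS[token_id]
    KNOWN_CONTEXTS[token_id].add(context)
    return "allow" if first_sighting else "review"

# Example: the same token reused from a different network and client gets flagged.
print(check_token_use("tok-123", "203.0.113.10", "Chrome/130"))   # allow (first sighting)
print(check_token_use("tok-123", "203.0.113.11", "Chrome/130"))   # allow (same network, same client)
print(check_token_use("tok-123", "198.51.100.7", "curl/8.5"))     # review (new context)
```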

2. AI-based attacks are maturing

Attackers are moving from using AI as an aid to deploying AI-native, agent-driven attacks, including hijacking enterprise agents, according to multiple security firms.

Agents can carry out reconnaissance, exploitation and data exfiltration with minimal human oversight, reducing the time between compromise and impact.

AI also enables attacks to adapt in real time, rewriting malware, adjusting tactics and evading static detection, according to Barracuda.

Thiele agrees, noting that AI and advanced automation help SOC analysts triage events, “but it also enables malicious actors to find and exploit vulnerabilities faster than ever”.

To manage the risk, Optiv recommends stronger governance, observability and training, alongside human oversight.
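
One way to picture that mix of governance, observability and human oversight is a simple guard around an agent's tool calls: read-only tools are allowed, sensitive actions need a human decision, and everything is logged. The sketch below is illustrative only; the tool names, policy and approval hook are assumptions rather than a real agent framework.

```python
import logging

# Illustrative only: a minimal governance layer around an AI agent's tool calls.
# Tool names, the approval hook and the policy itself are assumptions for the example.
logging.basicConfig(level=logging.INFO)

ALLOWED_TOOLS = {"search_tickets", "summarise_logs"}          # read-only actions
NEEDS_APPROVAL = {"reset_password", "export_customer_data"}   # sensitive actions

def run_tool(agent_id: str, tool: str, args: dict, approver=None):
    """Gate an agent's tool call: allow read-only tools, require a human decision
    for sensitive ones, and refuse anything outside the allowlist. Every call is logged."""
    logging.info("agent=%s tool=%s args=%s", agent_id, tool, args)
    if tool in ALLOWED_TOOLS:
        return f"executed {tool}"
    if tool in NEEDS_APPROVAL and approver is not None and approver(agent_id, tool, args):
        return f"executed {tool} with approval"
    logging.warning("blocked agent=%s tool=%s", agent_id, tool)
    return "blocked"

# Example: a human-in-the-loop callback stands in for a real approval workflow.
print(run_tool("helpdesk-agent", "summarise_logs", {"window": "24h"}))
print(run_tool("helpdesk-agent", "reset_password", {"user": "alice"},
               approver=lambda a, t, args: False))  # approver declines -> blocked
```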

3. Deepfakes become professional fraud tools

Deepfakes are becoming professional fraud tools, particularly for business email compromise (BEC) and bypassing identity verification, according to several security firms.

Thiele warns that as video generation improves “the risk of fraud skyrockets”.

These attacks enable fraudulent payments, socially engineered MFA resets and fake IT or customer support interactions. They’re increasingly combined with contextual insider knowledge scraped from social platforms, according to Flashpoint.

4. Browser-based attacks on the rise

Browsers are poised to overtake email as the most exploited phishing entry point.

Browser-level SaaS attacks are a natural evolution as organisations move more workloads online, Thiele said.

Adding to the risk, as AI assistants move into browsers, the attack surface will widen, Red Canary predicts.

Browsers are vulnerable to poisoned search results, fake CAPTCHA prompts and malicious payloads, while AI-generated content means browser-based lures are becoming indistinguishable from legitimate sites.

Traditional endpoint and operating system security controls don’t typically cover browser activity, creating a visibility gap that requires tighter identity controls, conditional access and system-level resilience — not awareness alone, Red Canary noted.
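
A rough sketch of what a conditional-access decision for browser sessions might look like is below. The attributes it checks (managed device, recognised browser, passkey-backed sign-in) are assumptions chosen to illustrate the idea, not a specific policy engine.

```python
from dataclasses import dataclass

# Illustrative only: a simplified conditional-access decision for browser-based SaaS access.
# The attributes and outcomes here are assumptions made for the example.
@dataclass
class SessionContext:
    managed_device: bool
    known_browser: bool
    passkey_auth: bool

def access_decision(ctx: SessionContext) -> str:
    """Tighter identity controls for SaaS reached through the browser:
    unmanaged devices are denied, unfamiliar browsers or weaker sign-ins trigger step-up auth."""
    if not ctx.managed_device:
        return "deny"
    if not ctx.known_browser or not ctx.passkey_auth:
        return "step_up"   # e.g. re-authenticate with phishing-resistant MFA
    return "allow"

print(access_decision(SessionContext(True, True, True)))    # allow
print(access_decision(SessionContext(True, False, True)))   # step_up
print(access_decision(SessionContext(False, True, True)))   # deny
```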

5. Human risk is reframed as AI enters the picture

One-size-fits-all annual security training may be overhauled in favour of behavioural risk intelligence and personalised interventions. In addition, security training will need to incorporate human–AI collaboration.

"AI introduces new risks: it is another avenue for data loss, and there are issues around trust, and overreliance,” Thiele explained.

At the same time, Living Security suggests AI will increasingly act as a safety net, guiding users and quietly correcting mistakes rather than policing them.
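
As a loose illustration of behavioural risk intelligence driving personalised interventions, the sketch below scores a handful of behaviour signals and picks a matching response. The signals, weights and thresholds are invented for the example, not drawn from any vendor's model.

```python
# Illustrative only: a toy behavioural-risk score that selects a personalised
# intervention instead of one annual course for everyone.
SIGNAL_WEIGHTS = {
    "clicked_simulated_phish": 3,
    "reused_password": 2,
    "pasted_sensitive_data_into_ai_tool": 2,
    "reported_suspicious_email": -2,   # good behaviour lowers the score
}

def choose_intervention(events: list[str]) -> str:
    score = sum(SIGNAL_WEIGHTS.get(e, 0) for e in events)
    if score >= 4:
        return "one-to-one coaching session"
    if score >= 2:
        return "short targeted module on the riskiest behaviour"
    return "light-touch nudge in the flow of work"

print(choose_intervention(["clicked_simulated_phish", "reused_password"]))  # coaching
print(choose_intervention(["reported_suspicious_email"]))                   # nudge
```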
