HarfangLab warns of new AI-driven cyber risks
harfanglab-waarschuwt-voor-nieuwe-ai-gedreven-cyberrisicos
Published by
WINMAG Pro Editorial Team
Mon, 23 March 2026, 04:25
Share

Disinformation and manipulation of AI models are increasing

Due to the rapid rise of open-source AI models and the growing use of large language models (LLMs) for information provision, it is becoming increasingly difficult to distinguish fact from fiction. Malicious actors are using AI to produce convincing fake content, from text and images to audio and video.

Additionally, HarfangLab points to the risk of so-called data poisoning. Research shows that minimal adjustments in training data can be sufficient to structurally influence the behavior of a model. This increases the risk of manipulated output, impacting decision-making and research.

'Trust in AI tools can only exist if there are technological safeguards, transparency, clear rules, and independent controls,' says Pierre Delcher, Head of Threat Research at HarfangLab. 'If one of those pillars is missing, small vulnerabilities can grow into systemic risks.'

AI as an attack vector in the software supply chain

Another concerning development is the targeted manipulation of code generated by large language models. Research shows that attackers can influence LLMs to propose unsafe or malicious code to developers. The AI assistant thus becomes part of the attack itself.

Instead of exploiting a specific vulnerability in existing software, the attack shifts to the development process itself. When manipulated code is incorporated into applications or software libraries, the impact can spread throughout the entire software supply chain. Just like with previous large-scale supply chain attacks, this can affect thousands or even millions of end users.

Hyper-personalized and scalable phishing

Phishing remains the most visible AI-related threat, but its nature is fundamentally changing. Where personalized fraud was previously manual and time-consuming, AI enables this process to be automated and refined on a large scale.

Attacks are no longer limited to email and SMS (phishing and smishing). Voice is also being used through AI-generated voices (vishing), where attackers can convincingly impersonate colleagues, supervisors, or suppliers. With the rapid improvement of content generation, the use of deepfakes is also becoming more accessible, making online scams more credible and harder to recognize.

According to HarfangLab, phishing is thus evolving into a polymorphic model: messages are automatically tailored to the specific target, context, and even previous interactions. This not only increases the likelihood of success but also makes detection by traditional filters significantly more complex.

Read also: Malware threats that will remain under the radar in 2026

From cyber incident to physical damage

AI systems are increasingly being integrated into operational environments, such as energy management, logistics, industrial automation, and vehicle systems. By the end of 2025, several organizations had already reported risky behavior from autonomous AI agents, including unauthorized access to systems and unintended data exposure.

According to HarfangLab, the risk is increasing as these AI agents directly control processes in the physical world. When such systems are manipulated, for example through indirect prompt injection or manipulated data, it can lead to tangible disruptions. Think of misdirected industrial robots, disrupted transport lines, or logistics systems routing cargo incorrectly.

The further rise of so-called 'physical AI', including autonomous warehouse systems and consumer-oriented robotics, reduces the distance between digital instruction and physical action. This also increases the chance that the misuse of AI not only causes digital damage but has a concrete impact on infrastructure and business processes.

Read also: Ransomware in retail: why 58% still pays

AI in defense: from detection to assistant

At the same time, AI also strengthens the defense side. Machine learning and deep learning have been used for years in detection engines to identify anomalous behavior and unknown threats. What has become visible since 2025 is a shift towards generative AI as an active assistant to security analysts.

Due to the increase in professional and automated attacks, the number of alerts that security teams need to assess is growing. AI is therefore being used to prioritize alerts, translate technical detection rules into understandable language, generate playbooks, and automate incident response tasks. AI also supports threat intelligence in summarizing attack campaigns and correlating sources, allowing analyses to be quickly translated into concrete actions.

According to Hugo Michard, AI Lead at HarfangLab, AI is thus definitively part of the playing field. 'AI is present on both sides: with attackers and with defenders. This not only requires technological innovation but also transparency, clear use cases, and compliance with regulations such as the AI Act.'

Read also: Almost half (44%) of organizations cite cybersecurity as a priority in video investments

het-cms-evolueert-waarom-intentie-de-nieuwe-interface-wordt

The CMS is evolving: why intention is becoming the new interface

Tuesday 31 March 2026 - 14:45
tot-leven-gebracht-meesterwerk-wint-ai-kunst-competitie-google

Masterpiece brought to life wins Google AI art competition

Tuesday 31 March 2026 - 01:05
asus-republic-of-gamers-strix-laptoplijn-keert-terug-met-de-nieuwste-intel-core-ultra-9-290hx-plus

ASUS Republic of Gamers Strix laptop line returns with the latest Intel Core Ultra 9 290HX Plus processors

Monday 30 March 2026 - 03:35
nieuwe-automatiseringstool-moet-ai-bruikbaar-maken-in-gesloten-zorgsystemen

New automation tool aims to make AI usable in closed healthcare systems

Sunday 29 March 2026 - 06:20