Risks of unapproved AI tools in companies
Published by
WINMAG Pro Editorial Team
Wed, 04 March 2026, 07:05

The workplace seeks solutions outside the rules

AI is no longer hype. It has become a daily reality in the workplace. But while the technology develops rapidly, formal AI policy at many companies lags behind. According to the recent STEM Workforce report from SThree, 72% of Dutch tech professionals use AI tools that are not officially approved by their organization. Consider ChatGPT, Gemini, or Copilot: public platforms that find their way into operational processes without internal validation.

Why does this happen? Because it works. Employees overwhelmingly choose these tools for their speed, ease of use, and functionality that approved alternatives do not (yet) offer. But the downside is significant, especially in terms of security, compliance, and control.

The biggest pitfall: apparent efficiency

What starts as a productivity gain often ends in dependency. Almost a quarter of respondents in the survey indicate they cannot complete their work without unapproved AI solutions. If the AI is removed, work comes to a halt. The tool thus becomes mission-critical, without the organization ever having knowingly approved or secured it.

The real pitfall lies in invisibility. IT departments have no visibility into what is being used, what data is being shared, or how securely it is stored. This unknowingly exposes companies to data breaches, reputational damage, and violations of laws and regulations such as the GDPR and, soon, the EU AI Act.

The risks: from privacy to policy damage

The risks are not hypothetical. 81% of professionals acknowledge that unauthorized AI use can jeopardize privacy or security. Yet the tools remain popular. Why? Because the official alternatives are often too slow, too limited, or poorly integrated into the workflow.

This creates a paradox. Companies invest in secure infrastructures, but their own staff bypass them because otherwise the work simply does not get done. Without proactive policy, organizations unwittingly leave their data security dependent on tools they have not chosen, tested, or validated themselves.

How can companies bridge this AI gap?

The solution lies not in banning but in guiding. Companies must actively invest in approved AI tools that are genuinely usable and powerful. This requires collaboration between IT, security, and business units, along with clear communication about risks, governance, and alternatives.

The upcoming EU AI Act provides companies with a framework to establish responsible AI policy. This requires not only technical but also ethical and organizational choices. Those who do not provide employees with clear guidelines force them into shadow work. And thus into risk.

AI without oversight is an open backdoor

Generative AI is powerful. But without policy, it is also vulnerable. Companies that ignore AI at the front end lose control at the back end. The future lies in controlled freedom: AI solutions that are tailored to real work processes and meet the requirements of IT, security, and compliance.

Now is the time to move from reaction to governance. Unapproved AI tools are no longer an incident. They are a symptom of a structural lack of sound policy.
