Shadow AI

The silent spread of shadow AI

What are the risks of unmanaged AI use and how can businesses mitigate them?

Words: Helena Vallely
Illustration: Michał Bednarski

‘Shadow AI’ sounds like something out of a dystopian novel, suggesting some kind of uncanny presence with unknown motives lurking at the periphery.

In reality, the definition of shadow AI – the unauthorised or unmanaged use of artificial intelligence tools at work – is a little less mysterious. However, it does have the potential to significantly impact businesses’ cyber security and data integrity.

It is a widespread problem, with a recent study from Owl Labs finding that almost half (49%) of UK employees surveyed admitted to using AI tools at work of their own volition.

They do this despite knowing that their AI choices could be risky, with 72% of employees recognising cyber security and 70% recognising data governance as potential pitfalls, according to research by Software AG. However, these employees find AI tools so valuable that 46% say they would refuse to give them up, even if their organisation banned them completely.

AI is increasingly a part of our lives, both work and personal, and it’s no wonder that employees are embracing it to augment their workday. Clare Elliott FMAAT, CFO of NetSupport, says: “We’re all being encouraged to use it, learn it and embrace it, otherwise we’ll be left behind, which creates a fear factor.”

However, the Software AG research revealed that few employees take adequate precautions when using personal AI at work, such as running security scans (27%) or checking data usage policies (29%).

We spoke with experts to examine the risks of shadow AI to businesses, and some of the practical ways business leaders and finance teams can mitigate them.

When IT lacks visibility and control, security suffers.

Lack of visibility and transparency

Shadow AI introduces unapproved technologies that can create vulnerabilities and undermine data integrity. For example, an employee downloading external software onto a company device could compromise company data or introduce a virus that goes on to infiltrate wider company systems.

Boris Cipot, senior security engineer at Black Duck, says: “When IT lacks visibility and control, security suffers.”

If teams start losing oversight of how data is used or where it is going, the chances of regulatory violations and security breaches rise sharply. And it only takes one incident, such as a data leak or a compliance failure, for financial and reputational damage to follow.

Cipot adds: “These incidents often make headlines and the fallout can be severe. Without proactive oversight, managing risks becomes reactive and costly.”

Working in the shadows

49% of UK employees admitted to using unapproved AI tools at work.
Source: Owl Labs

23% of AI users say they encounter hallucinations ‘very or fairly often’.
Source: YouGov

72% of employees recognise the cyber security risks of their choices.
Source: Software AG

46% of employees say they would refuse to stop using unmanaged AI tools.
Source: Software AG

Misinformation

AI systems, especially less regulated ones, can generate false or misleading information, known as hallucinations. The problem is relatively prevalent, with almost a quarter (23%) of AI users reporting that they encounter hallucinations very or fairly often, according to YouGov.

This false information can create real problems for businesses, with 60% of respondents in a KPMG UK study citing inaccuracy and hallucinations as the biggest concern when adopting generative AI (GenAI) into business processes.

Cipot says hallucinations can lead to poor decision-making based on inaccurate data. Over time, this can also corrupt wider company databases, spreading misinformation throughout the organisation.

For example, finance teams may find that AI hallucinations produce incorrect figures or misinterpret financial data, leading to inaccuracy in reports or forecasts. This can have serious consequences and even damage a business’s reputation.

“To avoid this, it’s critical for employees to use only vetted AI tools provided by IT that ensure data protection, regulatory compliance and secure usage,” says Cipot.

Illustration showing a man holding up a shield in front of him. Its shadow reveals holes he didn't know were there, meaning the shield offers much less protection than he realised.

Compliance issues

Feeding sensitive data such as customer details, financial records or proprietary code into AI tools can lead to data leaks or exposure of intellectual property. This, in turn, can have regulatory implications, putting businesses at risk of penalties or sanctions for violating data protection regulations and laws.

“Many users don’t realise that public AI platforms may store input data and use it for training purposes, which can breach regulations like GDPR or HIPAA,” says Cipot. “Additionally, if data is processed by AI systems hosted in other countries [such as the US or China], there’s a risk of violating international data transfer laws, particularly for organisations in the EU.”

This unauthorised usage can also threaten certifications such as ISO 27001 – the standard for information security management systems – as organisations must be able to monitor and control data flows to maintain compliance. “Losing such certification could directly impact a company’s reputation and revenue,” adds Cipot.

No proactive communication

Shadow AI operates without proper communication or oversight from management, meaning teams may not be aware of how data is handled or what risks are involved. This lack of transparency can lead to security gaps and operational inefficiencies.

Elliott says: “The combination of everyone being encouraged to access AI tools, and some employers not having software policy approvals in place, will inevitably mean some employers are unaware of what software is installed on company-owned devices.”

Many employees will be aware that downloading software typically requires approval from their IT department, but others may not – and some companies may not enforce such a policy at all. This leaves those companies vulnerable to employees freely accessing and installing potentially risky software.

Elliott adds: “If businesses are more proactive, they will then be more protected in the long term.”

What can organisations do to mitigate the risks?

“AI is now everywhere and avoiding it isn’t realistic,” says Cipot. Instead of burying their heads in the sand, he says finance teams should work with other departments across the business to develop a robust and comprehensive AI strategy. This should include a clear outline of how AI tools can be used, which tools are approved, the type of tasks they can be applied to and under what conditions.

“Prevention and detection must work together,” adds Cipot. Businesses should deploy systems such as internet traffic monitoring and data loss prevention (DLP), which helps to identify and block the unsafe sharing of sensitive data, so that mistakes or misuse are caught before they escalate.

He also recommends educating employees about the risks, so they understand not only the rules but also why those rules exist, building compliance from the ground up.

For Elliott, finance teams need to be proactive in facilitating the change and guiding employees. Organisations should assess which programmes will serve their business well, implement and embed them properly, and set up user accounts for employees themselves.

It’s important that the whole company collaborates closely to implement the AI policy, rather than trying to work in silos. “Finance should be budgeting for these changes and IT departments should be performing due diligence to provide employees with access to software and tools that will benefit the company,” adds Elliott.

With AI now available to anyone, it is incumbent on business leaders and finance teams to protect their organisations from the security and data risks posed by unauthorised use. Putting clear, practical rules in place is the first step towards ensuring AI use is responsible and keeping the risks to a minimum.

Are there any benefits to shadow AI?

For Elliott, while employees seeking out their own AI tools could push employers to more seriously consider implementing company-wide AI use, a lack of proper support and procedures makes this a risky strategy.

Cipot agrees, noting that while it may indirectly highlight process gaps, the risks far outweigh any possible insights that companies can get from shadow AI.

Security, IT and compliance teams are already under pressure dealing with known threats, and unapproved AI use introduces unknown vulnerabilities into an already complex and strained environment. “What these teams need is organisational support, not more obstacles,” says Cipot.

To succeed in a regulated, AI-driven world, Cipot says companies must adopt secure, compliant solutions. “Securing critical software is non-negotiable and business innovation must be built on trust, not hidden tools or unauthorised shortcuts,” he says.
