Real world ready | Next Gen accounting

Spotting deepfakes

Frauds involving AI-generated voices, videos and images are a growing threat to all businesses

Words Jessica Bown

AT A GLANCE

1

AI has the potential to make life easier for accountants in many ways, but the increasing sophistication of this technology is also creating problems for the finance sector.

2

One such issue is the increasing prevalence of AI-generated deepfake frauds. For the companies taken in by these elaborate scams, the cost can be devastating.

3

Criminals can use AI to imitate vocal patterns and create both static and moving images, making these frauds highly convincing and difficult to spot.


Accountants and finance professionals are often the targets of deepfake frauds due to the level of access they have to company funds.

In fact, figures suggest that such incidents in the financial sector jumped by 700% in 2023, with further growth seen in 2024 (Law Society, 2024).

What are deepfake frauds?

Deepfake frauds use AI-generated voices, videos or images to impersonate real people – often senior executives – and trick employees into transferring money or handing over sensitive information.

Because the technology can convincingly imitate vocal patterns and create both static and moving images, these scams can be extremely difficult to spot. And for the companies taken in by them, the cost can be devastating.

£20m deepfake fraud

In 2024, British engineering company Arup lost millions of pounds after AI fraudsters created a fake video conference apparently organised by the company’s chief financial officer.

According to police investigating the fraud, a finance team member received an invitation to join the video conference to discuss a “confidential transaction”.

During the conference, they were then convinced to make a series of bank transfers to the fraudsters’ accounts in Hong Kong. The total amount transferred is believed to have been in the region of £20m.

How common are attacks?

The Arup incident is far from the only occasion where deepfakes have been used to impersonate business executives.

Falsified images of Tesla billionaire Elon Musk have been circulated to encourage people to invest in cryptocurrency. Meanwhile, advertising giant WPP’s chief executive officer Mark Read was targeted by fraudsters who created a WhatsApp account using a publicly available image of him and set up a Microsoft Teams meeting that appeared to involve him and another senior WPP executive. Happily for WPP, this particular corporate scam was spotted before any damage could be done.

However, there are growing fears around how deepfake frauds could cause problems on a much wider socioeconomic level.

In 2022, fraudsters attempted to influence the war in Ukraine by circulating a deepfake video of the country’s president Volodymyr Zelensky supposedly telling his troops to surrender.

The video was quickly outed as a fake, but there’s little doubt that deepfakes can be – and are being – used to manipulate political situations around the world, a threat highlighted by former US president Joe Biden in a State of the Union address.

What are the laws?

The speed of recent advances in AI technology has left lawmakers rushing to catch up with the new trend for deepfake scams and frauds.

But AI regulations are now starting to come into force.

In the European Union, for example, the AI Act establishing both rules around artificial intelligence use and penalties for those who fail to comply became law on 1 August 2024.

In the UK, the Online Safety Act will require regulated services such as social media firms and search engines to assess the risk of illegal content or activity – including many types of deepfake content – and take steps to prevent it and remove it quickly when made aware of it.

How can businesses protect themselves?

Awareness is key to preventing deepfake frauds, especially as they can be so hard to detect. Research from Ofcom reveals that just one in 10 consumers feel confident in their ability to spot a deepfake fraud (Ofcom, 2024).

Therefore, employee training on how to recognise and react appropriately to suspected attacks should be a priority for all companies.

Other steps recommended by The Law Society include:

  • Simulating deepfake attacks to test staff training, identify vulnerabilities and improve response strategies;
  • Implementing strong authentication measures, such as multi-factor authentication and conditional access, to reduce the risk of security breaches;
  • Introducing a layered defence strategy that includes additional safeguarding measures and alerting mechanisms to stop successful attacks as quickly as possible;
  • Regularly assessing and auditing existing security measures to ensure they remain up to the job.

TOP TIPS

How to spot deepfakes

1

Signs a video is a deepfake include inconsistencies such as:

  • Time lags between speech and mouth movements;
  • Unnatural body movements;
  • Unexpected reflections in eyes and glasses.

2

Signs an image is a deepfake include:

  • Deformed hands or missing/extra fingers;
  • Distorted patterns;
  • A strange, often airbrushed, texture.

3

Signs audio content is a deepfake include:

  • A flat speaking voice;
  • Slurred speech;
  • Unusual background noises.

RESOURCES

1

Law Society, 2024. A deep dive into deepfakes.

2

Ofcom, 2024. A deep dive into deepfakes that demean, defraud and disinform.


The Association of Accounting Technicians. 30 Churchill Place, London E14 5RE. Registered charity no.1050724. A company limited by guarantee (No. 1518983).

[Image: a person in a yellow jumper using a tablet, with a digital face surrounded by symbols hovering above the screen – symbolising AI]
