I'm a hacker...here are five ways scammers use AI to access your data

Mail Online

A hacker has revealed how cyber criminals are using artificial intelligence to clone people's voices and steal thousands of pounds.

Dr Katie Paxton-Fear is a cybersecurity lecturer at Manchester Metropolitan University and also an 'ethical hacker' who 'hacks companies before the bad guys do'.

She has partnered with Vodafone Business on a new campaign to drive awareness of the rising threat of AI phishing scams on the UK business sector.

New research from the company suggests that office juniors are putting their workplace more at risk of AI phishing attacks than any other age group.

The study highlighted an 'age gap' in awareness – with younger staff aged 18 to 24 appearing more likely to fall for the new breed of AI phishing scams than their older peers.

Gen Z staff appear to be easier targets than most: nearly half (46%) have not updated their work password for more than a year, compared with an average of a third (33%) of staff overall.

Researchers quizzed 3,000 UK office workers and business leaders from small, medium and large firms on a range of cybersecurity matters, including awareness of AI phishing attacks.

The study revealed that the vast majority of UK businesses (94%) do not feel adequately prepared to manage the rising threat of sophisticated AI-driven phishing attacks.


In a bid to raise awareness, Katie has revealed how easily cyber criminals can use AI to clone people's voices and impersonate them over the phone – with the victim often none the wiser.

Hackers need just 'three seconds of audio' – such as a voicemail – to clone someone's voice. They then typically follow five simple steps to carry out their 'vishing' (voice phishing) scam.

To demonstrate this, businessman and entrepreneur Chris Donnelly challenged Katie to hack his business to see how easily criminals could use AI to defraud him.

Chris has been an entrepreneur for 15 years and is the founder of Lottie, a health tech platform for care homes.

Read on below as Katie explains the steps cyber criminals take to hack a company using AI voice cloning.

1. Reconnaissance 


Recommendations given to the UK Government to help businesses stay safe from AI-enabled cyber scams

Launching a 'Cyber Safe' PR campaign: Develop a nationwide PR campaign to promote Cyber Resilience Centres (CRCs) and the Cyber Essentials certification among businesses of all sizes.

Reallocating funding for local cybersecurity training: Reallocate funds within the National Cyber Security Strategy budget to support targeted local initiatives for businesses, focusing on effective engagement programmes.

Enhancing cybersecurity skills to prevent AI-led cyber-attacks: Promote the development and adoption of AI-driven cybersecurity tools and provide training to businesses on preventing AI-led cyber-attacks.

Expanding Cyber Resilience Centres (CRCs): Establish additional CRCs in underserved regions and enhance the capabilities of existing centres to offer tailored support for businesses. 

Source: Vodafone Business 


Katie said: 'Any hack starts out with reconnaissance'. A hacker will pick a victim and comb through their social media.

In this case, Chris is a public figure with thousands of followers across various social media platforms. His profiles reveal details about his staff and what jobs they do for him.

Now a hacker has both an unsuspecting boss and his equally unaware employee in their sights.

2. Voice cloning

Now the hacker will browse the boss's social media pages to find audio or video content.

Katie said: 'All we need to do is visit Chris's social media pages, download some video and copy his speech style. We only need three seconds of audio.'

AI voice cloning software can use the recording to recreate Chris's voice – now all the hacker needs to do is type in what they want their victim to say.

In this case, Katie types in 'Have you managed to pay the invoice I sent over?' – and the message is repeated in Chris's voice.

3. Make contact

The hacker sends a text to the employee pretending to be their boss. Even though the message comes from an unknown number, it tells the employee to expect a call.

In this case Chris's employee receives the text and awaits his boss's call.

4. The call

Now for the call. The hacker dials the employee from their computer using a piece of software, then simply types in the message they want the cloned Chris to say.

In the video, the employee hears his boss Chris ask him 'have you managed to pay the invoice I sent over? It's crucial this gets settled immediately'.


What is the employee to do? He has been given a direct order by his boss.

5. The wait

The employee has been given specific instructions on how to make the payment. Now it is a case of waiting to see whether he will follow them.

Katie said: 'The final step is whether or not the victim takes the action. Most hackers will know if they've been successful by the end of the phone call'.

Chris Donnelly, entrepreneur and CEO of Lottie, said: 'Cybersecurity has always been a priority for my business. It's something we think about all the time, and we ensure we keep our security protocols as updated as possible.

'You can imagine my surprise at how effortlessly the ethical hacker was able to breach our defences using sophisticated AI phishing tactics, like voice cloning.

'As someone who runs a health tech platform where we manage vast amounts of personal and private data, this experience highlights the importance of staying one step ahead in cybersecurity, especially with evolving AI threats.'

Katie warned: 'With AI, attackers can tailor messages to appear highly personalised, making it harder than ever for employees to distinguish a fake email from a legitimate one.

'It's a wake-up call for all businesses to strengthen their security measures and provide consistent training for staff to protect against even the most advanced forms of deception. Today, staying vigilant and adaptive is essential to protecting our organisation and clients.'

Katie added: 'Businesses, no matter their size, need to understand the real risk at hand and take proactive measures to defend against these threats.

'Strengthening cybersecurity practices, implementing advanced detection systems, and educating staff on recognising AI-driven scams are essential steps to safeguard valuable data and maintain trust.'