Today I was at an event and was asked, “So what are you worried about with AI?” It made me realize that I haven’t written many articles about the security issues around AI. So on this Security Thursday I’m going to highlight one issue that I know is going to slowly affect the world, and the sooner everyone is aware of it the better.
Hackers are already using AI to create programs and bots, but what we have seen so far is just scratching the surface. I think it will still be 6–18 months before we see a massive AI-powered cybersecurity threat so disruptive that there won’t be a single person who hasn’t heard of the “Colossus AI Network Terminator” or whatever they decide to call it. But my more immediate concern is that scammers are going to start using AI-powered tools for social engineering.
Deepfakes have been around for quite some time, and AI and ML sit at the core of that technology. Over a year ago there was a very funny deepfake video called “Groomsmen,” a parody of the movie Bridesmaids, but back then they had to get voice actors to play the characters. Now we have AI that can mimic other people’s voices, and it’s so convincing that you could swear you’re talking to that person.
As AI continues to advance at an unprecedented rate, replicating a human voice has become a reality. This technology lets scammers deceive victims using the voices of people they know and trust, with serious consequences for anyone who falls for it. Understanding how imitation is used in digital social engineering, and knowing the common red flags, can help keep people from becoming victims of these scams.
Voice cloning is a type of AI-generated manipulation that mimics an individual’s speech patterns and vocal characteristics. Given a sample of someone’s recorded speech, AI models can generate new audio that sounds like the original speaker. The amount of sample audio required has dropped drastically; some models now need as little as three seconds of audio to synthesize someone’s voice. This allows fraudsters to imitate their victims’ voices convincingly, making it difficult for victims to detect that something isn’t quite right.
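To give a sense of how low the barrier has become, here is a minimal sketch using the open-source Coqui TTS library’s XTTS v2 model, which supports cloning a voice from a short reference clip. The file names are placeholders; I include this only to show how accessible the tooling already is, not as a recommendation.

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS library
# (pip install TTS). File paths below are placeholders.
from TTS.api import TTS

# Load a multilingual model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Generate brand-new speech in the voice of the short reference sample.
tts.tts_to_file(
    text="This is a test of synthesized speech.",
    speaker_wav="reference.wav",   # a few seconds of the target's voice
    language="en",
    file_path="cloned_output.wav",
)
```

That is the entire script: a few seconds of audio in, a convincing voice out.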
The potential implications are far-reaching: scammers can use AI-generated audio to fool anyone from personal friends and family members to business associates and co-workers. In most cases, victims don’t realize they’re talking to an impostor until it’s too late. To protect themselves, individuals and businesses should watch for suspicious changes in behavior or tone during phone or video conversations that could indicate digital social engineering. Asking additional questions about the caller’s identity, requesting further verification when needed, and adopting multi-factor verification for sensitive phone requests (a sketch of one such check follows below) can add an extra layer of security against these threats.
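As one hypothetical way to implement that kind of verification, the sketch below uses the pyotp library to check a time-based one-time code that a caller reads from an authenticator app enrolled ahead of time. A cloned voice alone can’t produce a valid code. The helper and the enrollment flow here are illustrative assumptions, not a specific product.

```python
# Hypothetical out-of-band check for sensitive phone requests using a
# time-based one-time password (pip install pyotp). The caller reads the
# current code from an authenticator app enrolled beforehand.
import pyotp

def caller_passes_totp_check(shared_secret: str, spoken_code: str) -> bool:
    """Return True only if the code the caller read out is currently valid."""
    totp = pyotp.TOTP(shared_secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(spoken_code, valid_window=1)

if __name__ == "__main__":
    secret = pyotp.random_base32()          # enrolled once, out of band
    current = pyotp.TOTP(secret).now()      # what the real person would read
    print("Check passes:", caller_passes_totp_check(secret, current))
```

Even a low-tech version of the same idea works: agree on a family or team passphrase that must be given before any urgent money or credential request is honored.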
In time we are going to start seeing organizations invest in AI solutions that detect inconsistencies between human voices and AI-generated audio, so fraudulent imitations are identified quickly before any damage is done. By understanding how imitation works in digital social engineering, businesses can stay one step ahead of scammers today and protect not only themselves but their customers as well.
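To make that concrete, here is a toy sketch of what such detection could look like under the hood: extract spectral features from labeled real and synthetic clips, then train a simple classifier on them. Real detectors are far more sophisticated; the file lists and the choice of MFCC features here are my own assumptions for illustration.

```python
# Toy sketch of synthetic-voice detection: summarize clips with MFCC
# features via librosa and train a simple classifier
# (pip install librosa scikit-learn). File names are placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder corpora of labeled clips: 0 = human, 1 = AI-generated.
real_clips = ["human_01.wav", "human_02.wav"]
fake_clips = ["cloned_01.wav", "cloned_02.wav"]

X = np.stack([clip_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

model = LogisticRegression(max_iter=1000).fit(X, y)
print("P(synthetic):", model.predict_proba(X)[:, 1])
```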
I believe the best way to ensure your customers are protected from such scams is to educate them on digital social engineering and how to recognize red flags. You can do this by sharing blog posts, infographics, or emails with useful tips and resources. While there is no surefire way to protect against voice cloning, educating yourself and your customers can be an effective way of staying ahead of fraudsters.