Deepfakes are ranked the biggest AI threat by experts
- There’s no doubt AI has powerful and positive potential in business, but only when wielded responsibly
- With malicious intent, the technology can prove a sophisticated adversary. Experts agree that deepfakes are the biggest threat
From creating highly personalized customer experiences to detecting fraud in reams of financial data, AI has powerful applications in business. In its many forms, it’s probably the most talked-about technology today.
But in the wrong hands, AI can be a fearsome tool.
A recent study set out to explore the threats posed by AI, analyzing the risks of its use in crime and terrorism. Funded by the Dawes Centre for Future Crime at UCL (University College London), the study identified 20 ways AI could be put to nefarious ends over the next 15 years, asking 31 AI experts to rank them by the harm they could cause, the money they could make, how easy they were to carry out, and how hard they were to stop.
The experts – drawn from academia, the private sector, the police, government, and state security – agreed that deepfake technology represents the largest threat.
As deepfake technology continues to advance, the specialists said, fake content will become more difficult to identify and stop, and could help bad actors achieve a variety of aims, from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call.
Such uses could undermine trust in audio and visual evidence, the authors said, causing great societal harm. Worryingly, both of those scenarios – or close versions of them – have already played out.
Last year, the potential power of AI fakery drew mainstream attention when a deepfake video emerged on Facebook of the platform’s founder, Mark Zuckerberg, discussing the power of holding “billions of people’s stolen data.”
The same year saw scammers leverage AI and voice recordings to impersonate a business executive, reproducing his “slight German accent and other qualities” to successfully request the transfer of hundreds of thousands of dollars of company money to a fraudulent account.
Javvad Malik, Security Awareness Advocate at KnowBe4, told TechHQ: “The use of technology to impersonate a chief executive has some scary implications, especially given the fact that it is not inconceivable that coupled with video, the same attack could be played out as a video-call.”
The rise of deepfakes over the last couple of years has been rapid. A 2019 report by Deeptrace found that the number of deepfake videos online had doubled year-on-year, to nearly 15,000. The researchers attributed the ‘deepfake phenomenon’ to the growing commodification of tools and services that lower the barrier to entry for non-experts.
Discussing the results of the latest report, lead author Dr. Matthew Caldwell said that with people now spending large parts of their lives online, “online activity can make and break reputations.”
“Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity,” he said.
“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
Aside from fake content, five other AI-enabled crimes were judged to be of high concern: using driverless vehicles as weapons, crafting more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for large-scale blackmail, and authoring fake news.
Crimes of lesser concern included the sale of items and services falsely labeled as ‘AI’ – in fields such as cybersecurity and advertising – where the label could help drive profits.
Those of lowest concern included ‘burglar bots’ – small robots used to gain entry to properties through access points such as letterboxes or cat flaps – which were judged easy to defeat, for instance with letterbox cages, and AI-assisted stalking, which, while extremely damaging to individuals, could not operate at scale.