UN and Europol Warn of Growing AI Cyber-Threat
Cyber-criminals are just getting started with their malicious targeting and abuse of artificial intelligence (AI), according to a new report from Europol and the UN.
Compiled with help from Trend Micro, the Malicious Uses and Abuses of Artificial Intelligence report predicts that AI will increasingly be used as both an attack vector and an attack surface.
In effect, that means cyber-criminals are not only looking for ways to use AI tools in attacks, but also for methods to compromise or sabotage existing AI systems, such as those used in image and voice recognition and malware detection.
The report warned that, while deepfakes are the most talked about malicious use of AI, there are many other use cases which could be under development.
These include machine learning or AI systems designed to produce highly convincing and customized social engineering content at scale, or perhaps to automatically identify the high-value systems and data in a compromised network that should be exfiltrated.
AI-supported ransomware attacks might feature intelligent targeting and evasion, and self-propagation at high speed to cripple victim networks before they’ve had a chance to react, the report argued.
By finding blind spots in detection methods, such algorithms could also show attackers where to hide in a network, safe from discovery.
“AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, head of Europol’s European Cybercrime Centre.
“This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”
To that end, the paper highlights multiple areas where industry and law enforcement can come together to pre-empt the risks highlighted earlier. These include the development of AI as a crime-fighting tool and new ways to build resilience into existing AI systems to mitigate the threat of sabotage.