Automation and artificial intelligence (AI) have seen tremendous growth in recent years. From chatbots handling customer service to advanced algorithms optimising business processes, the benefits are clear: efficiency, scalability and cost savings. However, as with any technological advancement, there is a dark side. Cybercriminals are increasingly exploiting these technologies, and the rise of so-called AI agents brings new risks. In this in-depth article, we explore the dark side of automation, with a particular focus on the emergence of AI agents and their impact on cybersecurity.
Text: Sander Hulsman | Image: Shutterstock
Automation has opened doors not only for legitimate businesses, but also for cybercriminals. Where hackers once had to carry out attacks manually, they can now use automated tools to exploit vulnerabilities on a large scale. Think of phishing campaigns that can send thousands of emails per minute thanks to automation, or malware that automatically spreads through infected networks.
An example of this trend is the growing popularity of malware-as-a-service (MaaS). Cybercriminals can purchase ready-made malware from darknet marketplaces, which can then be deployed with minimal technical knowledge. These tools often come equipped with advanced features, such as automatically scanning systems for vulnerabilities or encrypting files for ransomware attacks.
AI agents are software programs designed to perform tasks autonomously, often using machine learning and other AI techniques. Unlike traditional software, which strictly follows its programming, AI agents can learn from experience and adapt their behaviour to new situations.
In a legitimate context, AI agents are already widely used. Think of chatbots that answer customer queries, algorithms that analyse financial transactions for fraud, or systems that optimise supply chains. These agents are designed to work efficiently and accurately, often without human intervention.
But in the hands of cybercriminals, AI agents take on a very different role. They can be deployed for a wide range of malicious activities, from generating realistic phishing emails to identifying and exploiting vulnerabilities in systems. The self-learning nature of these agents makes them particularly dangerous, as they can continuously adapt to new security measures.
The possibilities for AI agents in cybercrime are almost endless. They are already being used to generate convincing, personalised phishing emails at scale, to scan systems for exploitable vulnerabilities, to spread malware autonomously through infected networks, and to power the ready-made ransomware kits sold as malware-as-a-service.
The rise of AI agents poses new challenges for cybersecurity professionals. Traditional security measures, such as firewalls and antivirus software, are often no match for these advanced attacks. Cybercriminals can continuously adapt their tools, making signature-based detection methods less effective.
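The weakness of signature-based detection described above can be illustrated with a minimal sketch. The payloads and the signature database here are hypothetical, not taken from any real product; the point is only that a hash signature matches the exact bytes it was derived from, so even a trivial mutation of the malware evades it.

```python
import hashlib

# Toy "signature database": SHA-256 hashes of known-bad payloads.
# Real antivirus signatures are more sophisticated, but the
# exact-match principle is the same.
KNOWN_BAD = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def is_flagged(payload: bytes) -> bool:
    """Return True if the payload's hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

# The original sample is caught...
print(is_flagged(b"malicious_payload_v1"))   # True

# ...but a variant differing by a single byte produces a completely
# different hash, so the functionally identical attack slips through.
print(is_flagged(b"malicious_payload_v2"))   # False
```

This is why tools that continuously mutate their own code, as self-learning agents can, degrade the value of purely signature-based defences.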
Another issue is the scale at which attacks can be carried out. Using AI agents, cybercriminals can target vast numbers of systems in a short period of time. This makes it harder for organisations to defend themselves, especially if they lack the necessary resources or expertise.
In addition, the self-learning capabilities of AI agents create an asymmetric battle. While cybersecurity professionals often work reactively – responding to attacks after they have occurred – AI agents can proactively identify and exploit new vulnerabilities. This gives cybercriminals a significant advantage.
To protect themselves from the threat of AI agents, organisations must adapt their cybersecurity strategies. Key steps include deploying behaviour-based detection technology rather than relying solely on signatures; training employees to recognise increasingly convincing phishing attacks; collaborating with peers and authorities to share threat intelligence; and supporting regulation that limits the misuse of AI.
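One defensive adaptation such a strategy can include is behaviour-based anomaly detection: flagging activity that deviates sharply from a historical baseline instead of matching known signatures. A minimal sketch using a simple z-score test follows; the login-rate figures and the 3-sigma threshold are illustrative assumptions, and production systems use far richer models.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float,
                 threshold: float = 3.0) -> bool:
    """Flag `current` if it lies more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hypothetical baseline: login attempts per hour on a typical day.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

print(is_anomalous(baseline, 13))    # False: within the normal range
print(is_anomalous(baseline, 400))   # True: plausibly an automated attack
```

Because this approach models what *normal* looks like, it can still fire on a novel, machine-generated attack that no signature database has ever seen.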
Automation and AI offer immense opportunities, but they also bring new risks. The rise of AI agents has taken cybercrime to a new level, with advanced and large-scale attacks that are difficult to detect and combat. For organisations, it is crucial to proactively invest in cybersecurity and adapt to these evolving threats.
Only through a combination of technology, training, collaboration, and regulation can we tackle the dark side of automation. The fight against AI-driven cybercrime is complex, but with the right approach, organisations can arm themselves against this new generation of threats.
This article is based on insights from Group-IB’s blog on the dark side of automation and the rise of AI agents. More information can be found at www.group-ib.com.
Edition #08 – April 2025