The image of a hacker in a black sweatshirt, face lit by a computer screen in a dark basement, makes for good cinema. However, it downplays just how sophisticated these bad actors have become. The truth is the modern hacker is no longer sitting at a keyboard furiously typing code. Criminal organizations now use advanced AI models to automate nearly all of their schemes, building exploits and scams at scale. Moreover, hackers train their AI systems with machine learning (ML), feeding these engines terabytes or even petabytes of data so they can act autonomously, far faster than any human could.
The obvious question becomes: if the bad guys are using AI, why can’t the good guys? Unfortunately, it’s not a matter of “can’t” but “aren’t.” Businesses take, on average, 277 days – around nine months – to identify and report a data breach, largely because most aren’t using AI to defend against cyberattacks. Companies need AI security tools that respond proactively and autonomously to potential threats – especially anomalies occurring on their networks. Thankfully, many security tools have begun shifting toward an AI-driven approach, and it is time brands did likewise to prepare for today’s threat landscape.
How Do Hackers Use AI to Execute Cyberattacks?
To illustrate what organizations are up against: the moment a new vulnerability (a CVE, or Common Vulnerabilities and Exposures entry) is added to the NIST National Vulnerability Database and released to the public, malicious bots powered by AI take that payload information and scan the Internet for systems exposed to the new vulnerability. Within fifteen minutes, someone is negatively impacted.
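The same CVE feed that attackers race to exploit is available to defenders. The sketch below shows, under assumptions, how newly published CVE records shaped like the NVD API 2.0 JSON format could be filtered by severity; the sample record is fabricated for illustration, and a live monitor would fetch from the NVD REST API instead of a local dict.

```python
# Hedged sketch: filtering newly published CVEs by CVSS score, using the
# general shape of NVD API 2.0 JSON. The sample below is illustrative,
# fabricated data, not a real NVD response.

def critical_cves(nvd_response, min_score=9.0):
    """Return (id, score) pairs for CVEs scored at or above min_score."""
    results = []
    for item in nvd_response.get("vulnerabilities", []):
        cve = item.get("cve", {})
        metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
        for m in metrics:
            score = m.get("cvssData", {}).get("baseScore", 0.0)
            if score >= min_score:
                results.append((cve.get("id"), score))
                break
    return results

# Illustrative sample shaped like an NVD API 2.0 response (fabricated values)
sample = {
    "vulnerabilities": [
        {"cve": {"id": "CVE-0000-0001",
                 "metrics": {"cvssMetricV31": [
                     {"cvssData": {"baseScore": 9.8}}]}}},
        {"cve": {"id": "CVE-0000-0002",
                 "metrics": {"cvssMetricV31": [
                     {"cvssData": {"baseScore": 5.4}}]}}},
    ]
}

print(critical_cves(sample))  # only the 9.8 entry qualifies
```

A defensive team polling on this pattern could patch or shield exposed systems inside the same fifteen-minute window attackers operate in.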
What makes these bad actors so fast is that they use AI tools to automatically perform passive and active reconnaissance, scouring the Internet for specific conditions and then acting on them. For example, imagine a hacker leverages AI to passively conduct reconnaissance on a domain belonging to Company Z. At the same time, the hacker runs another program against a specific individual who works at Company Z, based on what it finds on LinkedIn and other social media sites, which it can also scrape automatically.
From this research, the hacker builds social engineering campaigns. Then, another automated AI tool scrapes publicly available information, such as email addresses associated with people at Company Z. Once these conditions align, the AI automatically sends a tailor-made phishing campaign to that individual at Company Z.
Training AI Models to Defend Against AI-Enabled Cyberattacks
Organizations can protect against AI cyberattacks by leveraging the processing power of machines to detect threats. AI excels at ingesting enormous volumes of information from many different system logs, extracting key insights, and identifying patterns. These logs (numbering in the hundreds of thousands) come from applications, operating systems, physical servers and more. Without the processing power of AI, businesses won’t have the threat intelligence needed to act quickly and mitigate risks.
To that end, companies need to train ML models to learn what a baseline set of data looks like for various machines and systems so the AI can spot anomalies. Ideally, an AI security tool should study two to four weeks of data to develop a golden image of standard user or machine behavior, allowing it to flag unusual or abnormal activity.
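The baseline idea can be sketched with simple statistics: learn the mean and spread of a behavioral metric over the training window, then flag values that deviate beyond a threshold. The daily login counts and the three-standard-deviation cutoff below are illustrative assumptions, not values from the article.

```python
# Hedged sketch: learning a per-user baseline from two to four weeks of
# historical event counts, then flagging readings that deviate beyond
# three standard deviations. All numbers are illustrative.
import statistics

def build_baseline(history):
    """Return (mean, stdev) for the training window."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    return mean, stdev

def is_anomalous(value, baseline, threshold=3.0):
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# Hypothetical two weeks of daily login counts for one user
history = [12, 10, 11, 13, 12, 9, 11, 12, 10, 11, 13, 12, 11, 10]
baseline = build_baseline(history)

print(is_anomalous(11, baseline))   # a typical day
print(is_anomalous(250, baseline))  # a sudden spike worth investigating
```

Production tools apply the same principle across many metrics at once, often with learned models rather than a single z-score, but the mechanics – baseline first, deviation second – are the same.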
Nevertheless, companies can get into trouble if an attacker or malicious software is already present in the environment when the AI is trained: disastrously, the model will learn to treat that malicious behavior as normal. Businesses should therefore start with a manual audit to establish a clean golden image as a baseline. Once they have that clean baseline, they can begin training the data model. Likewise, brands must continually audit this baseline by feeding the AI information from authoritative sources, such as NIST’s National Vulnerability Database.
Another consideration is that an AI security tool can sometimes detect suspicious activity but cannot determine whether the anomaly qualifies as a potential threat. In such cases, a level of understanding must be mapped into the AI so it recognizes the behaviors that characterize the beginning of an attack, such as someone attempting to disable a logger or other security tooling.
Identifying and Remediating Zero-Day Exploits
A common topic in the battle against cybercriminals is the zero-day exploit – an attack that targets a previously unknown software vulnerability. These attacks can be devastating, and understandably, the cybersecurity industry is shifting toward training AI to identify zero-day exploits and prevent them. It is not enough for an AI tool merely to alert an administrator about a possible attack – humans are comparatively slow. Instead, the AI must remediate the issue on its own. Businesses can use ML techniques to train AI to detect threats and react immediately, remediating the attack before it inflicts widespread damage across a system or network.
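The detect-then-remediate loop described above can be sketched as detection wired directly to an automated response, with no human in the loop for high-confidence verdicts. Here `quarantine_host` is a hypothetical stub standing in for whatever isolation mechanism (a firewall rule, a NAC policy, a port shutdown) a real deployment would use.

```python
# Hedged sketch: wiring detection directly to remediation so the
# response fires without waiting for a human. quarantine_host is a
# stub for a real isolation mechanism.

def quarantine_host(host, quarantined):
    """Record the host as isolated and report the action taken."""
    quarantined.add(host)
    return f"{host} isolated from network"

def handle_event(event, quarantined):
    # React immediately to high-confidence detections; log-and-wait
    # is too slow for machine-speed attacks.
    if event.get("verdict") == "malicious":
        return quarantine_host(event["host"], quarantined)
    return "no action"

quarantined = set()
print(handle_event({"host": "10.0.0.7", "verdict": "malicious"}, quarantined))
print(handle_event({"host": "10.0.0.8", "verdict": "benign"}, quarantined))
```

The design choice worth noting is that remediation is reversible (a quarantine, not a wipe), so a false positive costs minutes of downtime rather than data.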
Of course, building an AI security tool with these abilities in-house is no easy task. Unfortunately, there is a disparity between security teams’ budgets and what vendors charge for these tools. Typically, when a vendor releases a new security solution, the customer must purchase an additional module to access the AI-driven automations. As a result, many businesses turn to the budget-friendly alternative of open-source AI. This reality is not ideal, but for now, cybersecurity teams must continually familiarize themselves with open-source AI to maximize its potential.
Cybersecurity and a Single Pane of Glass
No solution is perfect, regardless of how advanced an AI-enabled cybersecurity tool may be; data breaches will eventually happen. Meanwhile, businesses continue to rely on Internet of Things (IoT) devices and edge computing, widening the attack surface. While companies can’t stop 100% of data breaches, a remote management platform – like Digi’s – can act as a single pane of glass, flagging any device running outdated software or firmware with a critical vulnerability and automating mass firmware and software updates. This system-wide visibility is paramount – especially as hackers become more sophisticated and their schemes more elaborate.
About the author: Josh Heller is Manager of Security Engineering for Products and Services at Digi International. Prior to Digi, he held key security roles at a variety of renowned enterprises, including security engineer at Best Buy, risk and security analyst at Cargill and quality control risk analyst at TCF Bank, to name a few. An enterprise security pioneer and mentor, he has deep experience in critical infrastructure, disaster recovery planning and internal IoT frameworks for both software and hardware development lifecycles, with the ability to identify physical and cyber security threats in many forms. He holds certifications in AWS solutions, Netskope Security Cloud Introductory Training, Tanium Operations and core essentials. Josh received a degree in network management and security from Anoka Technical College and attended the University of Minnesota, where he actively participated in CSI Bootcamp Security and Training.
Edited by Erik Linask