Among the biggest errors any organisation can make is failing to account for how emerging technologies will affect its industry. This is particularly true for cyber security professionals, who must understand the impact of every new technology on their organisation's security posture. The tools deployed to prevent and recover from cyber-attacks must constantly evolve as malicious actors learn to use new technology for criminal purposes.
Of all the new technologies to affect the art and science of cyber security, artificial intelligence (AI) is among the most important. As the cyber-environment continues to expand, advances in machine learning and AI are having a major impact on the scale of cyber security requirements, how malicious attacks are conducted, and what tools are available to defend against them.
The development of AI affects the threat environment in three key ways. First, it expands the number of things that cyber security professionals need to protect. Second, it empowers criminal hackers and their employers by giving them new tools with which to evade detection, and the means to conduct large-scale operations. Third, it empowers professionals by giving them new tools with which to detect, classify and manage threats.
AI expands the amount that needs protection
Advances in AI, bolstered by the power of cloud processing, are enabling the automation of an increasing number of activities.
The addition of ever-more devices to networks (sensors, voice-control devices, and other AI-related "smart" technologies) has dramatically increased the attack surface available for cyber criminals to exploit. As these technologies extend the Internet into everything from cement to medical devices, cyber experts must now protect things that were previously completely outside their purview.
The growing complexity that results from attaching ever-more devices to networks makes them significantly harder to monitor effectively, increasing the risk that a vulnerability will go unnoticed. Many large organisations run systems with known security flaws because of the difficulty of ensuring that every part of a complex network stays up-to-date. WannaCry, a ransomware worm, successfully shut down computers run by the UK National Health Service, Boeing, and FedEx simply because those organisations had failed to keep their systems updated with the latest security patches.
When a network reaches a certain magnitude and complexity, its administrators can find themselves unable to ensure that even basic precautions are implemented. In the past, most networks were relatively small, but the rise of ubiquitous intelligent automation is making vast networks increasingly common.
AI gives cyber criminals new tools to evade detection
The use of AI tools offers three things to cyber criminals: power, speed, and the ability to conduct more sophisticated attacks at scale. Self-upgrading botnets and viruses capable of pretending to be human are just two of the threats that organisations must defend against. The efficiencies that AI provides to cyber criminals are at the heart of the challenges that cyber security professionals will face in the next decade.
In 2016, a study into the use of AI to execute business email compromise (spear phishing) attacks—which compromise a business by defrauding unsuspecting employees—found that AI-mediated attacks succeeded 34% of the time, versus 38% for human-mediated attacks. The difference? In the time it took a human to attempt 130 attacks, the AI attempted over 800, so despite its slightly lower per-attempt success rate, it compromised far more victims overall.
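The arithmetic behind that comparison is worth making explicit. Using the figures quoted above (a rough sketch; the exact totals were not reported in this form), multiplying each attacker's success rate by their attack volume shows why scale wins:

```python
# Back-of-the-envelope comparison using the per-attempt rates and
# attack volumes quoted in the study above.
human_attempts, human_rate = 130, 0.38
ai_attempts, ai_rate = 800, 0.34

human_successes = human_attempts * human_rate  # ~49 victims
ai_successes = ai_attempts * ai_rate           # ~272 victims

print(f"Human: ~{human_successes:.0f} successful attacks")
print(f"AI:    ~{ai_successes:.0f} successful attacks")
```

The AI's lower per-attempt rate is swamped by its volume: roughly 270 successes against roughly 50 for the human attacker.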
One of the most daunting cyber security challenges related to AI is its ability to allow malware to hide. Right now, most malware that hides is explicitly programmed to do so using a specific method or tactic; in contrast, AI-enabled malware can determine for itself the best way to hide inside a particular system. It does this by monitoring a network's normal activity in order to learn how to operate in a manner that won't raise any red flags, reducing the probability that it will be detected before it has caused its intended damage.
AI empowers cyber security professionals by giving them new tools to manage threats
AI is a double-edged sword for cyber criminals; just as it offers power and speed to them, it does the same for cyber security professionals.
The speed offered by AI is fast becoming a necessity for managing the increased complexity of modern day networks. AI-enabled threat protection software like Darktrace can utilise autonomous functions to monitor activity, look for abnormalities, and triage potential issues. Furthermore, AI programs can make sure that defences are always active and responsive to threats, even if a professional isn’t on-site at a particular moment.
In some cases, the manner in which cyber security professionals take advantage of AI actually mirrors how it’s used by criminals.
Just as AI malware can learn what normal network activity looks like in order to hide amongst it, AI-powered protection can learn what normal network activity looks like in order to detect deviations that would indicate an intrusion.
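As a minimal sketch of the defensive side of that idea (illustrative only, not any particular vendor's method), a monitoring tool can learn a statistical baseline of normal activity—say, requests per minute from a host—and flag observations that deviate from it by several standard deviations:

```python
import statistics

def build_baseline(samples):
    """Learn what 'normal' looks like from historical measurements."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical historical requests-per-minute from one host.
normal_traffic = [98, 102, 97, 105, 101, 99, 103, 100, 96, 104]
baseline = build_baseline(normal_traffic)

print(is_anomalous(101, baseline))  # False: ordinary traffic
print(is_anomalous(250, baseline))  # True: a burst worth triaging
```

Real systems model far richer features than a single traffic count, but the principle is the same: the quality of the learned baseline determines what deviations can be caught.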
Whether a given anti-malware program can successfully detect a malicious or hiding AI depends on two factors: the quality of the algorithms used by both AIs, and the amount and quality of the data each AI was trained on. For an AI approach to be useful, it must have access to large amounts of high-quality data. It is for this reason that IBM's Watson for Cyber Security publicises its ability to ingest and interpret large amounts of unstructured data—such as academic papers—as one of its primary selling points.
One of the major advantages that cyber security professionals typically have over their adversaries is access to high-quality data. Driven by the collective benefit that organisations receive when they pool intelligence, cyber security professionals can stream terabytes of data into AI-enabled programs, increasing the accuracy of the judgments those systems can make. This provides a leg-up on cyber criminals, who typically lack readily available access to the amount of data necessary to optimally train AI based systems.
Looking ahead: the future of AI and cyber security
Advances in AI are driving digital transformations that will have huge effects on the future of cyber security. AI-driven monitoring and automation solutions are spreading throughout global supply chains, making them more efficient on the one hand, and more vulnerable to intrusion and manipulation on the other.
The potential damage of a system being breached by cyber criminals is growing rapidly as smart technology is integrated throughout cities and transport systems. Transportation and logistics firms that previously needed to focus on physical security must now also protect themselves from penetration by hackers; system-critical parts of global supply chains are fast becoming vulnerable to disruption.
The 2013 hacking of a maritime port in Antwerp provides an early example of cybercrime entering the physical world. In that case, drug traffickers hacked container terminal systems in order to locate and remove containers into which they had previously loaded drugs. Current research into the effects of AI on the security of maritime container terminals predicts that the potential damage from a cyber-attack will rise as an increasing number of shipping processes are automated with AI.
There are also risks associated with the use of AI for cyber-protection, which data from Cisco's 2018 cyber-security survey suggests is growing fast. Many AI programs rely on machine learning techniques that identify malware based on a fixed set of predetermined features; according to MIT, if a cyber-criminal can work out what those features are, they can remove them from their malware in order to make it "invisible" to the AI. Were a particular piece of popular AI-driven detection software to be compromised in this manner, the resulting damage could be severe.
AI doesn't replace cyber security professionals
Just because a program says it has "AI" doesn't mean it's good, and even if it is good, that doesn't mean it's invincible. Incautious use of AI for cyber security is risky behaviour; the expertise of trained cyber security professionals is needed to manage and review threat protection activity, regardless of whether that activity includes the use of AI. AI doesn't replace cyber experts; it frees them to focus more of their efforts on the specialised tasks they learned over the course of their professional training.
Find out more about becoming an expert by studying ECU’s 100% Online Master of Cyber Security.