Artificial intelligence is becoming ubiquitous. It is already in everyday devices including smartphones, TVs, and even speakers. It should not come as a surprise that it is also being harnessed in the field of cybersecurity.
The role of AI in cyber defense, however, is more complex than many assume. It is not all advantages and benefits. As tech journalist Chris O'Brien suggested in a VentureBeat piece, "The race is on to determine whether AI will help keep people and businesses secure in an increasingly connected world or push us into the digital abyss."
Experts acknowledge that AI is a double-edged sword in the context of cybersecurity. It has the potential to become cybersecurity's salvation, but it can also be its biggest threat.
Much has changed in the world of cybersecurity over the years. Defenses have improved significantly, with security strategies becoming more formidable, especially as security experts have started working together. These improvements in cybersecurity, however, are the logical consequences of evolving threats.
Cyber defenses have been strengthened because attacks have advanced in their sophistication, frequency, and volume. Cybercriminals have been relentless and boundlessly creative in finding new ways to defeat existing security solutions. Had the attacks not evolved, defenses likely would not have evolved either.
The development of automated continuous security validation is one of the best examples of how cybersecurity has advanced in response to new threats. By now, security experts are aware that it is unlikely for cyberattacks to stop advancing, so defenses should become equally ceaseless in their evolution.
Continuous security validation is also an excellent demonstration of how AI can improve cybersecurity. Cyber defense entails the collection and analysis of large amounts of data. AI and machine learning can greatly improve the processing of security data to come up with timely and useful alerts on the latest threats and attacks.
Artificial intelligence helps boost cybersecurity in a number of areas. It improves the ability of security controls to detect, identify, remediate, and block threats, particularly in the following key aspects.
● Vulnerability management - This is the core function referred to in Mantin's explanation earlier. The use of cybersecurity tools almost always leads to the accumulation of vast amounts of data and an endless stream of security alerts or notifications. To make sure that the most crucial concerns get urgent attention, many security solutions now employ artificial intelligence to prioritize the most critical threats or incidents. An example of this AI use case is the threat scores or color-coded indicators used in security validation platforms.
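The prioritization idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual scoring model: the alert fields, weights, and color thresholds are all invented assumptions.

```python
# Hypothetical sketch of AI-assisted alert triage: each alert gets a
# composite risk score, and only the top-ranked alerts are surfaced.
# The weights and thresholds below are illustrative assumptions.

def risk_score(alert):
    # Weighted blend of severity, asset criticality, and exploit
    # likelihood (each assumed normalized to 0.0-1.0 upstream).
    return (0.5 * alert["severity"]
            + 0.3 * alert["asset_criticality"]
            + 0.2 * alert["exploit_likelihood"])

def triage(alerts, top_n=2):
    # Rank all alerts, attach a color-coded indicator like those seen
    # in validation platforms, and keep only the most critical ones.
    ranked = sorted(alerts, key=risk_score, reverse=True)
    for alert in ranked:
        score = risk_score(alert)
        alert["color"] = ("red" if score >= 0.7
                          else "yellow" if score >= 0.4 else "green")
    return ranked[:top_n]

alerts = [
    {"id": "A1", "severity": 0.9, "asset_criticality": 0.8, "exploit_likelihood": 0.9},
    {"id": "A2", "severity": 0.3, "asset_criticality": 0.2, "exploit_likelihood": 0.1},
    {"id": "A3", "severity": 0.7, "asset_criticality": 0.9, "exploit_likelihood": 0.4},
]
print([a["id"] for a in triage(alerts)])  # → ['A1', 'A3']
```

In a real product the score would come from a trained model rather than fixed weights, but the triage loop around it would look much the same.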
● Threat hunting - The traditional way of detecting and identifying threats relies on threat databases, wherein potential attacks or vulnerabilities are determined based on the available information about them. This approach, however, is becoming less and less effective as cybercriminals find creative ways to disguise their attacks, often in tandem with social engineering.
Big data and web analytics specialist Eddie Segal says that traditional threat detection has a 90 percent detection rate. It can go up a couple of percentage points if thresholds are lowered and false positives are not controlled. With AI, the detection rate can approach 100 percent, because threats are flagged based on behaviors or patterns of activity that are regarded as anomalous rather than on known signatures alone.
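The behavioral approach described above can be sketched with a simple statistical baseline. This is a minimal illustration of the idea, assuming invented traffic data and a conventional three-standard-deviation threshold; production systems would use far richer models.

```python
import statistics

# Minimal sketch of behavior-based anomaly detection: build a baseline
# from historical activity, then flag observations that deviate too far
# from it. The data and threshold are invented for illustration.

def build_baseline(history):
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    mean, stdev = baseline
    # Flag values more than `threshold` standard deviations from the mean.
    return abs(value - mean) > threshold * stdev

# Hourly outbound-connection counts for a host over the past day.
history = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 12, 15]
baseline = build_baseline(history)

print(is_anomalous(14, baseline))   # typical activity → False
print(is_anomalous(120, baseline))  # sudden spike → True
```

The advantage over signature matching is that nothing needs to be known about the attack in advance; the spike itself is the signal.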
● Datacenter optimization - Cybersecurity is not just about preventing attacks. A well-rounded security solution also takes mitigation and remediation into account, something AI can also handle.
Artificial intelligence can be integrated into the monitoring and optimization of data centers so they can operate at optimum power consumption, temperature, bandwidth usage, and backup power allocation. AI can distinguish normal from anomalous behavior and implement adjustments that mitigate the effects of an attack, helping an organization avoid downtime or at least minimize shutdowns.
Google did this with its data centers a few years back. Consequently, the company reported a 40 percent decrease in cooling expenses as well as a 15 percent reduction in power usage.
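The monitor-and-adjust loop behind this kind of optimization can be sketched as follows. The operating bands, metric names, and mitigation actions here are invented assumptions for illustration, not Google's actual system (which uses trained neural networks rather than fixed bands).

```python
# Illustrative sketch of metric-driven data-center adjustment: compare
# live readings against known "normal" operating bands and propose a
# mitigation for anything out of range. All values are assumptions.

NORMAL_BANDS = {
    "temperature_c": (18.0, 27.0),
    "power_kw": (200.0, 350.0),
    "bandwidth_gbps": (1.0, 8.0),
}

ACTIONS = {
    "temperature_c": "increase cooling in affected zone",
    "power_kw": "shift load to backup capacity",
    "bandwidth_gbps": "rate-limit anomalous traffic flows",
}

def recommend(readings):
    # Return a mitigation for every metric outside its normal band.
    recs = []
    for metric, value in readings.items():
        low, high = NORMAL_BANDS[metric]
        if not low <= value <= high:
            recs.append((metric, ACTIONS[metric]))
    return recs

print(recommend({"temperature_c": 31.5, "power_kw": 280.0,
                 "bandwidth_gbps": 9.6}))
```

In an AI-driven system the "normal bands" would themselves be learned from historical telemetry rather than hard-coded, which is what lets the system adapt as workloads change.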
● Network security - Artificial intelligence and machine learning are also useful in boosting network security. This is possible by analyzing a network's topography and establishing security policies that are most suitable to a network's specific circumstances. In a way, this means that AI helps in rapidly enforcing a zero-trust security model for any organization.
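One way to picture deriving policy from a network's specific circumstances is a learned allow-list: traffic patterns observed during a baseline window become the policy, and everything else is denied by default, in the spirit of zero trust. The flow fields and services below are illustrative assumptions.

```python
# Hedged sketch of deriving least-privilege (zero-trust style) network
# policy from observed traffic: only (source, destination, port) tuples
# actually seen during a learning window are allowed; everything else
# is denied by default. Flow data here is invented for illustration.

def learn_policy(observed_flows):
    return {(f["src"], f["dst"], f["port"]) for f in observed_flows}

def is_allowed(flow, policy):
    return (flow["src"], flow["dst"], flow["port"]) in policy

baseline = [
    {"src": "web", "dst": "app", "port": 8080},
    {"src": "app", "dst": "db", "port": 5432},
]
policy = learn_policy(baseline)

print(is_allowed({"src": "app", "dst": "db", "port": 5432}, policy))  # True
print(is_allowed({"src": "web", "dst": "db", "port": 5432}, policy))  # False
```

The machine-learning contribution in real deployments is mostly in the learning step: clustering hosts into roles and generalizing the observed flows into policies, rather than memorizing exact tuples as this sketch does.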
On the flip side, AI can also be a tool for attacks. Back in 2016, one cybersecurity firm developed an algorithm called SNAPR, which can post nearly 7 spear-phishing tweets in a minute. That may not sound impressive, but SNAPR demonstrated something bad actors would be eager to exploit.
The algorithm made it possible to reach around 800 people with the spear-phishing attack and convinced 275 of the 800 to click on the malicious link in the Twitter posts. This performance is significantly more "productive" than the 1.075 tweets per minute and 125-person reach of an ordinary Twitter user. It means cybercriminals can dramatically increase the volume of their attacks with the help of AI.
AI technology can also make the creation of social engineering tools and accessories quicker and more convincing. When trying to deceive someone into doing something, for example, cybercriminals can make use of deepfakes, a product of AI technology. Deepfakes can leverage the identity or authority of a person to compel potential victims to perform certain actions.
Moreover, AI enables the rapid deployment of websites and blog or social media posts, which can be used to facilitate cyber attacks. In 2019, a site called ThisMarketingBlogDoesNotExist.com was introduced as part of an experiment to show AI's ability to generate fake content. The site was put up along with 30 polished blog posts in a matter of minutes. At this rate, cybercriminals can easily rebuild their "deception and attack infrastructure" whenever they are caught.
The duality of AI as a tool for both good and bad actors is not something that can be avoided. It is here to stay, so the best response is to make it work for good and not let cybercriminals monopolize the advantages it provides.
Fortunately, cybersecurity teams are not limited to using AI to augment their defenses. They can also collaborate, as in the case of the MITRE ATT&CK framework, a shared knowledge base of adversarial tactics and techniques that is also incorporated into security validation platforms. It gives security teams access to the latest cyber threat intelligence so they can test their security controls more effectively and implement adjustments or improvements as needed.
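One practical use of the framework is coverage tracking: mapping each validation test to the ATT&CK technique it exercises and reporting techniques with no passing test. The technique IDs below are real ATT&CK entries, but the test results are invented and the coverage logic is a simplified sketch, not a real platform's implementation.

```python
# Sketch of ATT&CK-based coverage checking: map validation tests to the
# technique they exercise, then report techniques that no control has
# demonstrably blocked. Test results here are invented for illustration.

TECHNIQUES = {
    "T1566": "Phishing",
    "T1078": "Valid Accounts",
    "T1059": "Command and Scripting Interpreter",
}

def coverage_gaps(test_results):
    # test_results maps technique ID -> True if a control blocked it.
    # Untested techniques count as gaps too.
    return sorted(t for t in TECHNIQUES if not test_results.get(t, False))

results = {"T1566": True, "T1078": False}  # T1059 was never tested
print(coverage_gaps(results))  # → ['T1059', 'T1078']
```

Reports like this are what let teams turn shared threat intelligence into concrete, prioritized improvements to their controls.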