In an increasingly digitized world, security in the virtual realm becomes an ever more pressing issue.
Especially with the pandemic accelerating the shift of everyday commitments and operations online, the need for robust security grows even more acute. Suddenly, far more is at stake in the face of growing cyberattack risks, and there is an urgent need for advanced technology that can keep pace with digital transformation across the globe.
Here is where artificial intelligence can come in. With a new horizon of capabilities, I believe the combination of AI and cybersecurity measures offers unique opportunities, insights and speed that have never been seen before.
According to a study by Capgemini, as of 2019, nearly two-thirds of the companies surveyed thought that AI would help identify critical cyberthreats. At the same time, 69% of organizations believed AI would be integral to quick and timely responses to cyberattacks. And while only about 1 in 5 organizations used AI before 2019, almost 2 out of 3 organizations were already planning to employ AI by 2020.
And in 2020, with the pandemic’s transformative effect, this embrace of AI only became more entrenched.
The list of AI capabilities that can bolster cybersecurity is long. AI can analyze user behavior, learn the underlying patterns and identify deviations from the norm, making it possible to quickly spot vulnerable areas in the network. It can also allow companies to automate routine security tasks to a high standard, trusting algorithms with baseline security work so that teams can focus on high-involvement cases that require human judgment. Companies can also use it to quickly find traces or signs of malware.
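To make the deviation-from-the-norm idea concrete, here is a minimal sketch in Python. It assumes a single numeric activity metric (hypothetical daily outbound-traffic figures, invented for the example) and uses a simple z-score test; real systems learn over far richer features and models.

```python
from statistics import mean, stdev

def find_deviations(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Hypothetical baseline: an employee's typical daily outbound traffic in MB.
baseline = [102, 98, 110, 95, 105, 101, 99, 104]

# Today's readings include one suspicious spike, which gets flagged.
flagged = find_deviations(baseline, [100, 97, 560, 103])
```

The value 560 falls far outside three standard deviations of the baseline, so it is the only reading flagged; the routine values pass through silently, which mirrors the "only surface what is abnormal" behavior described above.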
Spotting the many variants of malware — especially now that they tend to be thoroughly and intentionally disguised — is not an easy task. But I believe AI technology can evolve as fast as malware does in order to prevent future attacks from unrecognized sources.
Machine learning algorithms, trained on the wide variety of malware detected previously, can learn to anticipate which versions of malware might appear in the future. So when the system encounters a new form of malware — whether a slightly altered variant of an existing family or a completely new kind — it can cross-reference the sample against the patterns it learned during training, spot commonalities and produce a response based on previous successes in blocking similar malware.
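As a rough illustration of this cross-referencing idea, the toy sketch below compares a new sample's byte n-grams against a hypothetical database of known signatures using Jaccard set similarity. The signature strings, family names and the 0.5 threshold are all invented for the example; production detectors use trained models over much richer features, not raw byte overlap.

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """Sliding byte n-grams used as crude structural features."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Set overlap: 1.0 means identical feature sets, 0.0 disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(sample: bytes, known: dict, threshold: float = 0.5):
    """Return the known family most similar to `sample`, or None
    if nothing clears the similarity threshold."""
    feats = ngrams(sample)
    family, score = max(
        ((name, jaccard(feats, ngrams(sig))) for name, sig in known.items()),
        key=lambda kv: kv[1],
    )
    return family if score >= threshold else None

# Hypothetical signature database of previously seen samples.
known = {
    "worm_a": b"connect;download;exec_payload;spread",
    "trojan_b": b"keylog;exfiltrate;persist",
}

# A lightly mutated variant of worm_a still shares most of its n-grams.
variant = b"connect;download;exec_payload2;spread"
```

Calling `best_match(variant, known)` attributes the mutated sample to `worm_a`, while a completely unrelated byte string clears no threshold and returns `None` — the same "altered variant versus genuinely new" distinction drawn above.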
Identifying and responding to new types of malware is only one of the many ways AI could support cybersecurity. An AI-powered security system could also track users’ daily activities, gradually building an understanding of each user’s behavioral patterns and tendencies from their past actions. By analyzing this information, it could quickly recognize deviations from the norm and formulate an appropriate response.
An example of such monitoring: Imagine an employee clicks on a phishing link — something they rarely do. An AI-driven system can be trained to deduce on its own that this action is atypical for that employee and flag it as a potential risk to the system’s security. The network essentially leverages machine learning to understand how employees work so it can efficiently identify anomalous, potentially harmful behavior. Crucially, the algorithm should operate autonomously and concurrently with normal business operations, conducting ongoing analysis in the background while ensuring uninterrupted workflow across the organization.
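The flagging logic in that scenario can be sketched with a per-user frequency baseline. This is an illustrative toy — the user names, action labels and `min_count` cutoff are invented — and a real deployment would learn statistical models over many behavioral signals rather than raw counts, but it shows the shape of "learn what is routine, flag what is not."

```python
from collections import Counter

class BehaviorMonitor:
    """Toy per-user baseline: flag actions a user has rarely
    or never performed before."""

    def __init__(self, min_count=3):
        self.history = {}           # user -> Counter of past actions
        self.min_count = min_count  # seen fewer times than this => anomalous

    def observe(self, user, action):
        """Record a routine action, updating the user's baseline."""
        self.history.setdefault(user, Counter())[action] += 1

    def is_anomalous(self, user, action):
        """True if this action falls outside the user's learned baseline."""
        return self.history.get(user, Counter())[action] < self.min_count

monitor = BehaviorMonitor()

# Build a baseline from an employee's routine activity.
for _ in range(20):
    monitor.observe("alice", "open_crm")
    monitor.observe("alice", "send_report")

monitor.is_anomalous("alice", "open_crm")             # routine, not flagged
monitor.is_anomalous("alice", "click_external_link")  # never seen: flagged
```

Because the monitor only consults counts it has already accumulated, checks like these can run continuously in the background without interrupting anyone's work — the concurrent, non-disruptive operation the paragraph above calls for.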
Beyond the accuracy it develops over time, AI also reacts quickly enough to prevent major disruptions to an organization’s normal flow of operations. For humans, constantly monitoring everything and everyone in real time is a complex task — if not an impossible one. But AI can take on this guardian role within an organization and spot and respond to malware and other threats with unprecedented efficiency, without distracting other employees (unless intervention is necessary) or disrupting the business overall.
There are a few companies that already offer AI-powered cybersecurity solutions. Darktrace is among the better-known companies in the current climate. The company leverages the power of artificial intelligence to identify a wide range of threats in the earliest stages. Another example is SAP NS2, a subsidiary of SAP. SAP NS2 uses machine learning and data analytics technologies for tasks like monitoring system and user behaviors. Its solutions help government agencies and regulated customers protect sensitive information.
While I believe AI is the future of cybersecurity, introducing it into the security arena as a mainstream tool also comes with challenges. As an article from the IEEE Computer Society explained, from a resources perspective, companies will need to invest significant time and money to secure the computing power, memory and data required to build and maintain AI systems. To train a system to perform at this level, companies also need access to rich datasets covering a wide variety of malicious software. Finally, there is always the risk that hackers and cyberattackers will apply AI themselves to develop malware that resists the sophisticated ML defenses companies put in place.
I have no doubt that artificial intelligence is integral to the future of cybersecurity. It can accelerate internal processes, ensure granular monitoring at all times and avoid interruptions to organizations’ workflows. However, in these early stages, it is also important to consider the risk of the technology falling into the wrong hands — and being turned against the very defenses it powers.