
What will the future of AI and cybersecurity look like?

The relationship between AI and cybersecurity is going to shape the future of humanity

The relationship between AI and cybersecurity will be essential to protect companies and institutions against increasingly sophisticated and dangerous cyberattacks

On May 11, 1997, the Deep Blue supercomputer defeated Garry Kasparov, the world chess champion, in the sixth and final game of a match that marked a turning point in the relationship between humans and technology. For the first time, the IBM engineers who developed Deep Blue had demonstrated that a machine could outthink one of the most prodigious human minds in history.

More than 26 years later, Artificial Intelligence and robotization are critical players in many productive sectors, from the automotive industry to finance. The relationship between AI and cybersecurity is set to shape the immediate future of our societies. Why?

On the one hand, the digitization of companies and households has meant that the attack surface available to cybercriminals is practically infinite. As a result, cyberattacks are growing in number, complexity and impact.

On the other hand, AI systems are multiplying and improving by leaps and bounds. Their potential to transform how we live, work, do business and communicate is enormous and beyond doubt. The greater their relevance and adoption across the economy and society, the more likely it becomes that criminals will:

  • Employ AI to carry out attacks.
  • Launch attacks against AI systems themselves.

Next, we will review the impact of Artificial Intelligence on cybersecurity and, from there, look ahead to how the relationship between AI and cybersecurity may evolve and what it will mean for companies, public administrations and society.

1. The rise of UEBA systems

When addressing the relationship between AI and cybersecurity, we must focus on a solution that combines the two fields and has experienced a boom in the last five years. We are talking about User and Entity Behavior Analytics (UEBA) systems that, as the name suggests, analyze the behavior of users and entities. For what purpose? To protect the systems of companies or institutions.

How do UEBA systems work? They use machine learning to process large volumes of data, analyze the behavior of users and entities (such as servers or routers) and model their usual or ordinary behavior.

Once these “normal” behaviors have been modeled and set as reference baselines, the same machine learning and data analysis tools are used to identify behaviors that deviate from the baseline and could be suspicious, so that the mechanisms for detecting events affecting the cybersecurity of a company’s or institution’s assets can be fine-tuned.
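
To make this baselining idea tangible, here is a minimal sketch in Python, using entirely synthetic data and hypothetical feature names (logins per hour, megabytes uploaded): an Isolation Forest is trained on a user’s ordinary activity and then flags sessions that deviate from that baseline. Real UEBA platforms model far richer behavior for every user and entity; this only illustrates the principle of detecting deviations from a learned baseline.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline activity for one user: [logins_per_hour, megabytes_uploaded]
baseline = np.column_stack([
    rng.normal(3, 1, 1000),    # typical login rate
    rng.normal(50, 15, 1000),  # typical upload volume
])

# Model the user's "normal" behavior
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: two ordinary sessions and one that uploads far too much data
new_sessions = np.array([
    [3.2, 48.0],
    [2.5, 61.0],
    [40.0, 900.0],  # burst of logins plus a huge upload: deviates from the baseline
])

# predict() returns 1 for behavior consistent with the baseline and -1 for anomalies
for session, label in zip(new_sessions, model.predict(new_sessions)):
    print(session, "-> suspicious" if label == -1 else "-> normal")

In a real deployment, the baseline would be recomputed continuously and per user or entity, and the resulting anomaly scores would feed the detection mechanisms described above.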

Moreover, UEBA systems not only detect both internal and external threats; they are also helpful for monitoring any potentially dangerous event and facilitating the response to security incidents.

For all these reasons, they have become a valuable tool for professionals who manage the defensive capabilities of companies and institutions.

2. AI and cybersecurity today

UEBA systems graphically demonstrate that cybersecurity is one of the pioneering fields in the use of Artificial Intelligence. AI models are already being used to optimize critical cybersecurity services and protect corporate assets.

Just as it is undeniable that AI poses several challenges for cybersecurity experts, it is also clear that this disruptive technology can be critical to the industry.

What can we expect in the future? New, more powerful AI systems will be developed that enable cybersecurity professionals to optimize the prevention, mitigation, detection, response and recovery phases of security incidents. How? By harnessing AI’s knowledge to design new tactics, techniques and procedures (TTPs) that enable security teams to anticipate criminals and improve the resilience of corporate and government systems.

AI and cybersecurity have a bidirectional and complementary relationship

3. Learning new strategies for beating the bad guys

How was Deep Blue able to beat Kasparov? Because IBM engineers had fed it 700,000 chess games played throughout history.

However, today’s Artificial Intelligence is capable of going further. AI systems are no longer limited to learning from the knowledge generated by humankind; they can also teach themselves.

For example, AI systems such as AlphaZero can learn strategies for chess and similar games, such as Go, without needing to process any previous games. How? By playing against themselves. In doing so, they develop game strategies that:

  • Beat the best strategies conceived by people over the centuries.
  • Highlight people’s creative limitations and question some classical postulates. For example, in chess or Go, there are certain openings or moves that experts consider mistakes or poor choices. And yet, AIs have used them successfully.
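
A toy example helps make self-play concrete. The following sketch, written in plain Python and bearing no relation to AlphaZero’s real architecture (which combines deep neural networks with Monte Carlo tree search), lets an agent learn the game of Nim purely by playing against itself: it never sees a human game, yet it rediscovers the classic winning strategy of always leaving a multiple of four objects to the opponent.

import random

# Self-play on Nim: players alternately take 1-3 objects; whoever takes the last one wins.
# Q[pile][take] estimates how good it is for the player to move to take `take` objects.
ACTIONS, MAX_PILE, EPISODES, ALPHA, EPSILON = (1, 2, 3), 21, 50_000, 0.1, 0.2
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, MAX_PILE + 1)}

def best(pile):
    return max(Q[pile], key=Q[pile].get)

for _ in range(EPISODES):
    pile = random.randint(1, MAX_PILE)
    while pile > 0:
        # The agent plays both sides, choosing moves epsilon-greedily
        action = random.choice(list(Q[pile])) if random.random() < EPSILON else best(pile)
        new_pile = pile - action
        # Negamax-style target: taking the last object is worth +1; otherwise the
        # opponent's best reply from the new position is exactly what we stand to lose.
        target = 1.0 if new_pile == 0 else -max(Q[new_pile].values())
        Q[pile][action] += ALPHA * (target - Q[pile][action])
        pile = new_pile

# For winning positions (pile % 4 != 0) the learned policy leaves a multiple of 4 behind,
# the strategy humans derived analytically, found here with zero human examples.
for pile in range(1, MAX_PILE + 1):
    print(f"pile={pile:2d}  take={best(pile)}  leaves={pile - best(pile)}")

The same principle, scaled up with neural networks and enormous amounts of self-play, is what allows systems like AlphaZero to surpass centuries of accumulated human theory.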

3.1. Being more imaginative than criminals

How does this translate to the relationship between AI and cybersecurity? Soon, artificial intelligence will not only optimize organizations’ defensive capabilities, design and execute pen-testing services, or implement Red Team scenarios but will also help cybersecurity professionals think of new strategies, tactics, techniques, and procedures, pushing the limits of human imagination.

The relationship between the experts who protect companies, institutions and citizens and the cybercriminals who attack them is a perpetual game of cat and mouse. Whenever the former devise mechanisms to hinder the TTPs employed by malicious actors, the latter are forced to modify their strategies.

In this sense, the relationship between AI and cybersecurity can help defensive teams develop innovative solutions to combat cyberattacks, forcing malicious actors to keep devising new methodologies, at a constant cost to them.

After all, today’s chess grandmasters are no longer obsessed with beating AIs but with employing them to enrich the way they play.

In the future, AIs could be pitted against each other to improve defensive capabilities against cyberattacks

4. What if Kasparov were also a machine? AI vs. AI

So far, we have delved into the relationship between AI and cybersecurity by focusing on developing models that can be used to provide the best cybersecurity and cyberintelligence services. Still, we must address another trend that will be key to the future of AI and cybersecurity: creating environments in which two AI systems can be pitted against each other:

  • A defensive AI capable of detecting intrusions, abnormal behavior or security breaches that some hostile actor might employ to gain access to information that should be beyond its reach. In other words, a defensive AI, at the service of the companies’ defensive teams, would function as another element of the organization’s security strategy.
  • An attacking AI. Or, in other words, a system designed to attack a company, exploit vulnerabilities, infiltrate an organization and go unnoticed by the defending AI. Competition between the two would help each improve: the defensive AI, its ability to detect malicious actions; the attacking AI, its ability to stay stealthy, find new evasion techniques and spread across systems, as sketched below.
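
The following heavily simplified sketch, again in Python with synthetic data and hypothetical traffic features, illustrates this co-evolution loop: a “defender” classifier learns to separate benign from malicious samples, an “attacker” mutates its samples until some evade detection, and the defender retrains on the evasive samples it uncovers. Real adversarial setups are far more sophisticated, but the feedback dynamic is the same.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic traffic described by two hypothetical features:
# [requests_per_minute, payload_entropy]
benign = rng.normal([20.0, 3.0], [5.0, 0.5], size=(500, 2))
malicious = rng.normal([80.0, 6.5], [10.0, 0.8], size=(500, 2))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

# Defensive AI: a classifier that separates benign (0) from malicious (1) traffic
defender = LogisticRegression().fit(X, y)

for round_ in range(3):
    # Attacking AI: push malicious samples toward benign-looking values and keep
    # only the mutations the current defender misclassifies as benign.
    shift = np.array([15.0, 0.8]) * (round_ + 1)
    mutated = malicious - shift + rng.normal(0.0, 1.0, malicious.shape)
    evasive = mutated[defender.predict(mutated) == 0]
    print(f"round {round_}: {len(evasive)} samples slipped past the defender")
    if len(evasive) == 0:
        break

    # Defensive AI: learn from the attack by adding the evasive samples,
    # correctly labeled as malicious, to its training data and retraining.
    X = np.vstack([X, evasive])
    y = np.concatenate([y, np.ones(len(evasive), dtype=int)])
    defender = LogisticRegression().fit(X, y)

Each side only improves because the other keeps adapting, which is precisely the dynamic described in the two points above.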

This leap in the relationship between AI and cybersecurity may soon become essential. It will help professionals and experts successfully tackle the security challenges posed by cyberattacks, including those that use AI to commit fraud and disrupt the functioning of companies and institutions.

5. AI and cybersecurity tomorrow

The dizzying evolution of Artificial Intelligence and the increasing complexity of an ever-expanding cyber threat landscape make it challenging to forecast how both disciplines will evolve in the coming years.

However, the European Union Agency for Cybersecurity (ENISA) has produced a report addressing the critical cybersecurity risks for the remainder of the decade.

Most of the threats that companies and public administrations will face are related to Artificial Intelligence:

  • Advanced disinformation campaigns based on deepfakes.
  • Digital surveillance and the rise of authoritarianism enabled by facial recognition.
  • Targeted attacks that use data from smart devices to build behavioral models of victims.
  • Hybrid attacks that employ Machine Learning algorithms.

But attacks against AI systems themselves are also among the most worrying threats of the present and, above all, of the immediate future, especially considering the growing relevance of these systems in companies’ operations and people’s daily lives.

5.1. Communicating vessels that feed into each other

What do these threats show? The relationship between AI and cybersecurity is bidirectional and complex. Cybercriminals can use Artificial Intelligence to achieve their criminal objectives and successfully attack companies, public administrations and citizens. But, at the same time, AI is already present in the day-to-day work of cybersecurity professionals and has the potential to transform the way they work to help them protect companies, institutions and individuals.

Likewise, the relevance of AI systems makes them priority targets for malicious actors. Cybersecurity experts are therefore called upon to play an essential role in optimizing the defenses of these systems.

In short, the coming years will bring enormous changes that will once again transform the economy and people’s lifestyles. Faced with a panorama full of uncertainties, our only certainty is that the relationship between AI and cybersecurity will shape the world’s future.

More articles in this series about AI and cybersecurity

This article is part of a series of articles about AI and cybersecurity:

  1. What are the AI security risks?
  2. Top 10 vulnerabilities in LLM applications such as ChatGPT
  3. Best practices in cybersecurity for AI
  4. Artificial Intelligence Fraud: New Technology, Old Targets
  5. AI, deepfake, and the evolution of CEO fraud
  6. What will the future of AI and cybersecurity look like?
  7. The Risks of Using Generative AI in Business: Protect Your Secrets