Cybersecurity blog header

What are the AI security risks?

AI security risks include not only threats against systems

Artificial Intelligence is set to revolutionize our economy and way of life. But… What are the AI security risks?

What literature or movies raised as a possibility for decades has become today, a tangible reality. Artificial Intelligence is already part of our lives. It has become one of the significant issues of this era in the heat of Machine Learning or generative AI, so much so that Artificial Intelligence is set to change our productive fabric and how we live. But what are the security risks of AI?

In recent years, and especially in 2023, various organizations have increased their production of methodologies and guides to focus on the security risks of AI and help companies prevent them successfully.

Thus, the European Union Agency for Cybersecurity (ENISA) has published various frameworks, methodologies, and reports addressing AI security risks and the challenges companies must face.
The US National Institute of Standards and Technology (NIST) has created a framework for managing AI security risks. The OWASP Foundation, a global methodological benchmark, has launched a project to address AI security risks.

On the regulatory front, the European Union is taking the final steps in the processing and approval of the Artificial Intelligence Act. The draft regulation, which the European Parliament and the Council must now negotiate, emphasizes the relationship between cybersecurity and AI.

Below, we will break down the main AI security risks that companies developing Artificial Intelligence and using this disruptive technology must take into account to detect threats, prevent security incidents, and comply with a regulatory framework that will become increasingly demanding.

1. AI is one of the great allies of cybersecurity

ChatGPT, Midjourney, DALL-E, Copy AI… 2023 will be marked in our memories as the year in which generative AIs, i.e., those capable of creating content and responding to people’s requests, have captured the attention of the world’s public opinion.

However, the history of Artificial Intelligence can be traced back to Alan Turing, with decades of research and development of Machine Learning, neural networks, Deep Learning, and natural language processing tools behind it.

Artificial Intelligence is already present in multiple devices and technologies that companies and citizens use daily to automate tasks or optimize decision-making. In this sense, Artificial Intelligence has become a great ally of cybersecurity professionals and companies in strengthening their defensive capabilities and improving their resilience against cyberattacks.

Without going any further, thanks to Machine Learning tools, it has been possible to automate the detection of threats or the implementation of response mechanisms to attacks. As well as optimizing security assessments and the prioritization of vulnerabilities, predicting attack patterns of hostile actors, extracting high value-added information from data to identify vulnerabilities before criminals do, or improving forensic analysis to remedy detected problems.

1.1. And AI cybersecurity

With the growing relevance of AI systems and their potential for businesses and citizens, AI security risks have become a significant issue in terms of cybersecurity.

Just as Artificial Intelligence is critical to designing and executing cybersecurity services, these are crucial when it comes to undertaking the protection of AI systems and applications at a time when:

  • Cybercriminals are beginning to target AI.
  • Risks linked to the supply chain are becoming increasingly high.

2. Data, models, cyber-attacks. AI security risks

If there is one fundamental element when it comes to AI, especially Machine Learning and Deep Learning systems, it is data. These systems work thanks to models that must be trained with data.

If the data are numerous, varied, unbiased, and have not been manipulated, the model can function optimally and perform at a high level. On the other hand, if the data used to train the models has been corrupted, the model will exhibit manipulated behavior that can have severe consequences for companies and users.

On this basis, we should note that, when addressing AI security risks, we must differentiate between:

  • Threats targeting AI systems. Attacks in which the AI systems are the target: models, data, etc…
  • The malicious use of AI tools to launch cyber-attacks against enterprise software, systems, or individuals.

2.1. Risks to AI systems

The project launched by OWASP to assess the leading security and privacy risks facing Artificial Intelligence identifies several dangers and pays particular attention to potential attacks against AI models.

2.1.1. Data security risks

The AI pipeline should be considered a new attack surface since it lies beyond the traditional realm of software development. Why? It incorporates data science.

Both data engineering and model engineering are essential to developing AI systems. Both disciplines require robust security controls to prevent data leakage or poisoning, intellectual property theft, or supply chain attacks.

NIST has created a framework for managing AI security risks

On the other hand, the risk is associated with data use during AI development. To train and test a model, data scientists need to work with accurate data, which could become sensitive. Hence, a rigorous access control mechanism must be implemented in which data scientists can only access the information they need to do their work.

2.1.2. Attacks against AI models

As noted above, attacks against an AI model are vital to addressing AI cybersecurity. These high-risk attacks can be prevented:

  • Protecting the AI development process.
  • Hiding the model parameters.
  • Limiting access to the model.
  • Implementing a monitoring system to detect malicious inputs.
  • Considering this kind of attack during the training phase of the model.

In such a way, combining knowledge in cybersecurity with training in Machine Learning is necessary. In addition, classical cybersecurity measures, such as applying the principle of least privilege, can also be implemented. Attack types

OWASP compiles the following types of attacks against AI models:

  • Data poisoning. If the training data changes, the model’s behavior can be manipulated. This change makes it possible to sabotage the AI system or get it to make decisions the attacker wants.
  • Input manipulation. This attack seeks to manipulate models with misleading input data. Prompt injection is the paradigmatic example of this type of attack.
  • Membership inference. A data record and a black box access to the model can determine whether a document was in the training data set. This issue implies that hostile actors can know whether a person suffers from a specific disease, is part of a political party, or is enrolled in a particular organization.
  • Model inversion or data reconstruction. By interacting with a model, its training data are estimated. If these data are sensitive, privacy may be compromised.
  • Model theft. Interacting with a model can lead to determining its behavior and copying it to train another model, which is intellectual property theft.
  • Model supply chain attack. These attacks can manipulate a model’s life cycle, for example, by contaminating a base model that has been made public and corrupting Deep Learning models that use transfer learning to refine that model.

2.1.3. Maintainability of AI Code

Data scientists are focused on producing working models and less on creating code that is easy to read by other practitioners. This decision makes it challenging to analyze AI code, detect bugs, or vulnerability management. Hence, it is essential to combine the knowledge of data scientists with the training and experience of software engineers and cybersecurity experts.

2.1.4. Complexity of the AI Supply Chain

Artificial Intelligence makes the software supply chain more complex. Firstly, because AI systems usually have several supply chains (data, models…), the provenance sources can be parallel or sequential. If we add to this the relevance of attacks against models and their behavior cannot be evaluated by static analysis, we face a highly relevant risk.

Therefore, the traditional software bill of materials (SBOM) must be complemented by the AI bill of materials (AIBOM) while taking the necessary measures to audit suppliers’ security. AI supply chain management thus becomes an essential aspect of AI security.

2.1.5. Reuse of external AI code

As with traditional software development, data scientists can benefit from open-source code, although it may contain weaknesses and vulnerabilities that affect security and privacy. Therefore, a thorough control of reused code must be carried out.

2.2. AI cyberattacks and optimization of offenders’ capabilities

In addition to all the attacks directed against AI systems, this disruptive technology can be used by criminals to optimize their attack capabilities. In other words, AI security risks don’t only include threats against systems. They also incorporate AIs as tools at the service of cybercriminals.

In its report on Artificial Intelligence and Cybersecurity, ENISA gives an example of sophisticated cyberattacks, using malicious generative AI to generate deep fakes and manipulate information, voices, images, videos, and even faces.

But we also have to consider attacks that require less resources and knowledge. For example, using generative AI to create persuasive texts to attack people, companies, and institutions through social engineering techniques: phishing, smishing, spear-phishing… Or using AI to decide which vulnerabilities are more easily exploitable to attack an organization’s corporate systems.

Likewise, AI systems can optimize the efficiency and effectiveness of the malware used by cybercriminal groups in several key aspects: evasion of detection mechanisms, adaptation to changing environments, propagation, and persistence in the attacked systems…

Moreover, AI-based malware can employ learning techniques and improve their effectiveness, executing more successful attacks.

Just as AI systems are in full expansion, cyberattacks developed from this technology’s potential are also evolving rapidly. Thus, they will become more sophisticated in the coming years, and their potential impact on companies and citizens will be more significant. Therefore, Cybersecurity strategies must be strengthened, considering this new range of attacks.

AI security risks have become a major issue when it comes to cybersecurity.

3. Types of actors seeking to exploit AI security risks

What actors may seek to exploit AI vulnerabilities to meet their criminal objectives? ENISA has categorized malicious actors into seven typologies with varying characteristics and goals:

3.1. Cybercriminals

  • Cybercriminals. Cybercriminal groups have a clear objective: to profit. To gain economic benefits from their criminal activity, they can use AI systems to carry out attacks or attack these systems directly. For example, hacking AI chatbots to access sensitive information such as the bank details of a company’s customers.
  • Script kiddies. This class of cybercriminals lacks the knowledge to carry out complex attacks and design their malicious software, so they use packaged attack tools and pre-written scripts to attack corporate systems.

3.2. Actors threatening the social and economic system

  • Government actors and state-sponsored groups. Think, for example, of country-sponsored APT groups. These groups have many resources and extensive experience, allowing them to develop more sophisticated and complex attacks. Their objectives can range from attacking critical sectors and infrastructures of a country to destabilizing its democratic system by altering elections and sowing disinformation to stealing confidential information from companies and public administrations.
  • Terrorists. Cyberterrorists seek to cause direct damage to people’s lives, even causing deaths, for example, by sabotaging crucial infrastructures or sensitive sectors such as health. Terrorism has been a constant scourge throughout the 21st century and is now not only a security problem but also a cybersecurity problem.

3.3 Friendly fire and competition

  • Company employees and suppliers. People with access to critical AI elements such as models or data sets can intentionally sabotage systems, for example, by poisoning training data. In addition, they can also unintentionally cause security incidents by accidentally corrupting data.
  • Rival companies. Competition in the technology sector concerning Artificial Intelligence is increasing, so attacks may come from rival companies seeking to steal intellectual property or undermine the reputation of companies developing or employing AI systems.

3.4. Hacktivist

This concept mixes hacking with activism to refer to hostile actors whose motivation is essentially ideological and who seek to hack AI systems to highlight their vulnerabilities and risks.
The rise of Artificial Intelligence and the prominence of generative AI in recent times has accentuated the emergence of voices warning about the dangers of AI, not only in terms of cybersecurity.

4. How do AI risks differ from traditional software risks?

Artificial Intelligence systems are software, but with certain particularities that make them more complex and widen the attack surface of traditional software.

That is why the framework for managing AI risks developed by NIST compiles some of the new risks associated with the rise of AI. NIST also details other threats regarding the software we use daily but have been exacerbated.

Some of these risks are not directly related to cybersecurity. For example, the computational costs of AI development, the complexity of maintenance tasks, or the impact of these technologies on the environment.

However, the framework does address risks related to the security of systems, organizations, and users.

4.1. New and more complex cybersecurity challenges

  1. The data used in the construction of the AI system may not reliably represent the context or intended use of the system, and the quality of the data may impact the reliability of the AI, with negative consequences.
  2. The dependence of AI systems on the data used for training.
  3. Modifications during the training phase, whether intentional or unintentional, can alter the performance of the AI system.
  4. The data sets used during AI training may become obsolete when the system is deployed, affecting AI performance.
  5. The existing mismatch between the scale and complexity of AI systems and the conventional software applications that host them.
  6. Pre-trained models are crucial to facilitating AI research and developing high-performance systems in less time and at a lower cost. However, they can also increase levels of statistical uncertainty and cause bias and reproducibility problems.
  7. Multiple privacy risks as a consequence of AI systems’ enormous data aggregation capacity.
  8. It’s more challenging to perform security testing of AI-based software since AI code development differs from traditional code development, and questions may arise about what and how to test.

4.2. Implementing a security strategy for AI systems

The security risks specific to AI systems make it necessary for organizations to put in place strategies to manage cybersecurity and privacy risks in all phases of the AI lifecycle: design, development, deployment, evaluation, and use.

In this regard, cybersecurity services should be incorporated into companies’ security programs. Threat modeling, risk analysis, training of professionals, static and dynamic analysis, code analysis, pentesting, and Red Team exercises.

These tasks will strengthen the security, resilience, and privacy of AI systems to achieve the following:

  • Securing applications and IT infrastructure, hiding the model’s parameters to protect against attacks.
  • Strengthen the protection of new development pipelines linked to AI.
  • Properly manage bias issues in AI systems.
  • Address the risks associated with generative AI.
  • Address security issues linked to evasion, model extraction, system availability, and implementing Machine Learning attacks.
  • Analyze and monitor the complex attack surface of AI systems to detect attacks in the early stages of the Cyber Kill Chain and strange behaviors.
  • Consider the risks associated with AI technologies developed by third parties.

Artificial Intelligence research is in full swing. Technological innovations in this field, as well as software development, should serve to strengthen the reliability and performance of systems—as well as their security and resilience against the actions of hostile actors.

5. The security of AI, a significant issue in this era

The incorporation of Artificial Intelligence into the various productive sectors and the democratization of access to AI, with tools available to SMEs and not only to large companies, represents a new milestone in the technological revolution we have experienced in recent decades.

Therefore, AI security risks must be central to public debate and the heart of business strategies.

Successful attacks against AI systems can have catastrophic repercussions for the companies that develop them, but also for the companies that use them and the public: exfiltration of private data, disinformation, loss of reputation, legal consequences…

It’s therefore essential to address AI security throughout its lifecycle by implementing adequate security controls and conducting continuous security assessments.

5.1. Security-by-design and throughout the lifecycle

As ENISA’s guide on AI and cybersecurity points out, the security-by-design concept, widely used in software development, must be transferred to the AI arena.

How? By integrating cybersecurity controls, mechanisms, and best practices in the early stages of designing and developing AI systems and the applications and IT infrastructures that support them. Thus, the EU agency recommends:

  1. Conduct security testing and perform threat modeling to identify vulnerabilities, flaws, and attack vectors.
  2. Commit to secure coding practices and perform source code audits to detect bugs and vulnerabilities.
  3. Implement secure practices for data processing to ensure confidentiality and prevent data corruption or mining.
  4. Execute security tests to identify security problems early in the development process. Tests such as DAST are essential to manage dynamic cybersecurity risk and prioritize threats.
  5. Ensure that AI systems are designed transparently, and their behavior can be audited on an ongoing basis to detect anomalous behavior and correct it before it leads to security incidents.

In short, AI security risks must be addressed with the utmost efficiency and rigor by judiciously applying all the wealth of knowledge, best practices, tests, and methodologies designed by cybersecurity professionals in recent decades.

To this end, it’s essential to introduce security controls from the first phase of an AI system’s lifecycle, to have multidisciplinary teams in place, and to carry out comprehensive monitoring of the AI supply chain.

AI is essential to optimize cybersecurity services. And cybersecurity is the best shield to protect this technology from hostile actors.

More articles in this series about AI and cybersecurity

This article is part of a series of articles about AI and cybersecurity

  1. What are the AI security risks?
  2. Top 10 vulnerabilities in LLM applications such as ChatGPT
  3. Best practices in cybersecurity for AI