
The Risks of Using Generative AI in Business: Protect Your Secrets

Risks of using generative AI in business include disclosure of trade secrets and intellectual property theft

Companies developing AI systems and those contracting third-party applications should know the risks of using generative AI in business

2023 was the year that generative AI went mainstream. Thousands of companies and professionals started using ChatGPT, Bing AI, or Copy.ai to streamline numerous daily activities. The widespread use of this critical technology of our time brought to light the risks of using generative AI in business, such as the exfiltration of business information or the theft of intellectual and industrial property.

While AI opens up a wealth of opportunities to strengthen companies’ cybersecurity posture, it also brings with it a number of risks, such as criminals using generative AI to refine social engineering campaigns or automate malware attacks.

In addition, there are the risks of using generative AI in legitimate but insecure ways, for example, using a tool such as ChatGPT to check an application’s source code for bugs, as happened at Samsung, one of the world’s largest technology companies.

As a result of this insecure practice, the code was stored on the servers of OpenAI, the company that developed the AI system, and became part of its training data.

What would happen if a malicious actor launched a successful attack against ChatGPT or OpenAI’s servers? Samsung immediately limited the use of these systems so that reality would not answer this question in the form of a security incident.

Below, we will address some of the risks of using generative AI in business, and the role cybersecurity plays in enabling companies to take advantage of this technology safely.

1. The consequences of attacks on AI models

Software supply chain attacks have become one of the most worrying trends in cybersecurity. The same can be said of AI security risks, which include both malicious actions against these systems and the use of AI applications to optimise criminals’ techniques, tactics and procedures.

Attacking AI models employed by hundreds of companies combines both threats. Thus, attacks against AI systems can affect not only the companies that develop them but also those that use third-party models.

1.1. Disclosure of secrets and intellectual property

Why is it dangerous to enter trade secrets and information linked to a company’s intellectual property into an AI system through a prompt?

Malicious actors can launch attacks such as:

  • Membership inference. Criminals with black-box access to the attacked model query it repeatedly to determine whether a particular record was part of its training data set. This type of attack can expose confidential and particularly sensitive information about companies and citizens (see the sketch after this list).
  • Model inversion or data reconstruction. One of the most sophisticated attacks against AI models is the inversion of the models themselves. How? By interacting with the model, malicious actors can reconstruct an approximation of its training data and thus breach the confidentiality and privacy of the information.
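
To make the membership inference threat concrete, here is a minimal Python sketch using a synthetic dataset and a deliberately overfitted scikit-learn model; the data, model and threshold are illustrative assumptions rather than real attack tooling. Records the model memorised during training tend to receive a much lower loss than records it has never seen, and an attacker who can observe the model’s confidence scores can exploit that gap to guess which records were in the training set.

```python
# Minimal sketch of a loss-threshold membership inference test on a toy,
# deliberately overfitted model (assumed setup, not a real production system).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_in, y_in)  # sees only X_in

def per_sample_loss(clf, X, y):
    # Cross-entropy of the probability the model assigns to the true label.
    p = clf.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-6, None))

loss_members = per_sample_loss(model, X_in, y_in)      # records seen in training
loss_outsiders = per_sample_loss(model, X_out, y_out)  # records never seen

# The attacker guesses "this record was in the training set" whenever the loss
# falls below a threshold (here simply the overall median).
threshold = np.median(np.concatenate([loss_members, loss_outsiders]))
print("members flagged as training data:    ", (loss_members < threshold).mean())
print("non-members flagged as training data:", (loss_outsiders < threshold).mean())
```

If prompts containing trade secrets end up in a model’s training set, the same logic lets an attacker confirm that a specific document or record was used to train it.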

Intellectual property theft has a very high economic cost and can seriously damage a company’s market position. It also results in a loss of competitive advantage.

1.2. Exfiltration of business data and customer information

Another significant risk of using generative AI in business is the possibility of malicious actors obtaining confidential data about the companies themselves or about their customers, employees, or partners.

As with intellectual property, if prompts containing data about customers or strategic business issues are run in an AI application, criminals can perform membership inference or model inversion attacks to get the information.

We should also remember that, when it comes to intellectual property theft and the exfiltration of sensitive information, the servers on which AI systems store their data can themselves be attacked.

LLM development companies must adopt secure development practices to prevent the risks of using generative AI in enterprises

1.3. Errors due to malfunctioning of AI systems

Since ChatGPT became a popular tool in the public eye, more than a few people have tried to test the limits of generative AIs, for example, to find flaws in their logical reasoning.

In some more extreme cases, users have detected anomalous behaviour from systems such as Bing AI, to the extent that the AI claimed to have spied on Microsoft workers through their laptop webcams.

In addition to these incidents, there are the consequences of attacks against the models that seek to undermine their operation:

  • Data poisoning. Attackers sabotage an AI system by tampering with the data it uses to train itself (a toy illustration follows this list).
  • Input manipulation. Another kind of attack against an AI model is manipulating the system’s input data. How? By injecting malicious prompts (prompt injection).
  • Supply chain attacks corrupt a base model that other AI systems use to perform transfer learning.
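
As a toy illustration of data poisoning, the sketch below uses synthetic data and a generic scikit-learn classifier (an assumed setup, not any specific production pipeline): the attacker relabels a large share of one class in the training set, and the model trained on the poisoned data performs visibly worse than the one trained on clean data.

```python
# Minimal sketch of a label-flipping data poisoning attack on a toy dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker poisons the training set by relabelling 60% of class-0 records
# as class 1, biasing the model towards the wrong class.
poisoned = y_train.copy()
class0 = np.where(y_train == 0)[0]
flipped = rng.choice(class0, size=int(0.6 * len(class0)), replace=False)
poisoned[flipped] = 1

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("accuracy of model trained on clean data:   ", clean_model.score(X_test, y_test))
print("accuracy of model trained on poisoned data:", poisoned_model.score(X_test, y_test))
```

In a real pipeline the poison is far harder to spot, which is why controls over where training data comes from and who can modify it form part of the supply chain measures discussed later.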

1.4. Legal issues related to data protection

Since adopting the General Data Protection Regulation (GDPR), the European Union has had a strong legal framework to protect the privacy of individuals’ data.

If a company discloses information about its customers, employees, suppliers or business partners to a generative AI owned by another company, it may be in breach of the existing rules.

Moreover, if an AI model is successfully attacked, private data can be exfiltrated, leading to legal consequences, fines for violating European rules and damage to the credibility of the company whose professionals provided private information to the attacked AI.

2. What are companies doing to mitigate the risks of using generative AI in business?

After it became public that up to three Samsung employees had disclosed proprietary intellectual property and confidential corporate data to ChatGPT, many companies acted immediately to limit or ban the use of AI developed and managed by third parties while accelerating the design of their own models.

2.1. Limiting or banning the use of AIs

Large technology companies such as Apple or Amazon, global financial institutions such as JPMorgan, Goldman Sachs, or Deutsche Bank, telecommunications companies such as Verizon, and retail organisations such as Walmart implemented protocols last year to limit their employees’ use of generative AI.

These internal policies aim to mitigate the risks of using generative AI in business by taking a shortcut: restriction, rather than training and awareness-raising about the risks of using generative AI inappropriately in enterprises.

In this respect, large global companies are following the lead of many educational institutions, which banned natural language modelling applications to prevent students from using them, for example, to produce assignments.

Strengthening the security of generative AI is key to the future of this technology and its use in the enterprise environment

2.2. Developing proprietary language models

At the same time, larger and more technologically advanced companies have opted to design their own AI systems for internal use.

Is the sensitive information fed into the AI application 100% secure in these cases? The data will only be as well protected as the AI system itself. In fact, companies must treat the AI architecture as a new attack surface and put in place specific security controls to protect the data in their language models.

What are the essential aspects that companies developing their own AI systems need to consider?

  • Auditing the AI code for bugs or vulnerabilities, mitigating them, and implementing secure development practices are essential.
  • Secure the AI supply chain:
    • Perform an exhaustive control of all AI supply chains (data, models…).
    • Build and maintain a software bill of materials (SBOM), including AI application components, dependencies and data (an illustrative entry is sketched after this list).
    • Audit technology providers.
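
As an illustration of what such an AI bill of materials could record, here is a minimal Python sketch. The field names, model names and URLs are hypothetical placeholders rather than a formal SPDX or CycloneDX document; the point is that the model, the base model it was fine-tuned from, the training data and the software dependencies are all inventoried so they can be audited and re-checked whenever a component is found to be compromised.

```python
# Illustrative (non-standard) inventory entry for an AI application.
import json

ai_bom_entry = {
    "application": "internal-support-assistant",         # hypothetical application
    "model": {
        "name": "assistant-v2",
        "base_model": "example-base-llm-7b",              # hypothetical upstream model
        "base_model_source": "https://example.com/models/base-llm-7b",
        "fine_tuning_datasets": [
            {"name": "support-tickets-2023", "owner": "customer-care", "pii_reviewed": True},
        ],
    },
    "dependencies": [
        {"package": "transformers", "version": "4.38.0"},
        {"package": "torch", "version": "2.2.0"},
    ],
    "last_security_audit": "2024-03-01",
}

print(json.dumps(ai_bom_entry, indent=2))
```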

3. Cybersecurity services to minimise the risks of using generative AI in business

In light of the risks of using generative AI in business outlined above, many managers and practitioners may ask: What can we do to protect against cyber-attacks on AI models and reduce the risks of using generative AI in business?

Both companies developing AI and those using systems designed by third parties need to adapt their security strategies to include these threats and have comprehensive cybersecurity services in place to prevent risks, detect threats and respond to attacks.

3.1. From secure development to incident response

  • Conduct secure development of AI systems from design and throughout their lifecycle. How to do it?
  • Ensure that AIs can detect attacks and reject malicious prompts. For example, researchers have shown that unwanted behaviour can be induced in applications such as ChatGPT or Gemini. How? By using ASCII art to smuggle prompts into the models that cannot be interpreted semantically on their own. How can this be avoided? By fine-tuning models and training agents to detect such hostile practices (a simple input filter along these lines is sketched after this list).
  • Manage existing vulnerabilities and detect emerging ones to mitigate risks before an attack occurs, including in AI supply chains.
  • Design and execute specific Red Team scenarios on attacks against AI systems.
  • Have a proactive incident response service that can quickly contain an attack.
  • Implement training and awareness programs on generative AI for professional and business purposes so workers can use this technology without exposing critical information.
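
As a very rough illustration of the kind of input filtering mentioned in the list above, the sketch below flags prompts dominated by non-semantic characters, such as ASCII-art payloads, before they ever reach the language model. The threshold and regular expression are assumptions to be tuned on real traffic, and a heuristic like this complements, rather than replaces, fine-tuning and dedicated detection agents.

```python
# Minimal heuristic guard for ASCII-art-style prompts (assumed thresholds).
import re

MAX_SYMBOL_RATIO = 0.3  # assumed: share of non-alphanumeric characters tolerated
ASCII_ART_RUN = re.compile(r"[#*_|/\\\-=+.]{8,}")  # long runs of "drawing" characters

def looks_suspicious(prompt: str) -> bool:
    # Flag prompts that are mostly symbols or contain long ASCII-art-like runs.
    stripped = re.sub(r"\s", "", prompt)
    if not stripped:
        return True
    symbol_ratio = sum(not c.isalnum() for c in stripped) / len(stripped)
    return symbol_ratio > MAX_SYMBOL_RATIO or bool(ASCII_ART_RUN.search(prompt))

if __name__ == "__main__":
    ascii_art = "|_| _  _ | What does this say?\n|(_|(_ |<"
    print(looks_suspicious("Summarise our Q3 incident report."))  # False
    print(looks_suspicious(ascii_art))                            # True
```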

In short, Artificial Intelligence is already essential to many companies’ day-to-day operations. The risks of using generative AI in business are real, but they can be successfully addressed by designing a comprehensive cybersecurity strategy to detect and mitigate them, so that companies can safely benefit from the advantages of this disruptive technology.

More articles in this series about AI and cybersecurity

This article is part of a series of articles about AI and cybersecurity

  1. What are the AI security risks?
  2. Top 10 vulnerabilities in LLM applications such as ChatGPT
  3. Best practices in cybersecurity for AI
  4. Artificial Intelligence Fraud: New Technology, Old Targets
  5. AI, deepfake, and the evolution of CEO fraud
  6. What will the future of AI and cybersecurity look like?
  7. The Risks of Using Generative AI in Business: Protect Your Secrets