Cybersecurity blog header

Cyberattacks against AI chatbots. A new threat for businesses

Cyberattacks against chatbots are one of the latest trends in cyber threats

Cyberattacks against AI chatbots are a growing threat to companies that use this technology in commerce or customer service

123456. This was the username and password for the administrator account of the AI chatbot used by 90% of McDonald’s franchises to manage job applications from millions of people.

In addition, the chatbot suffered from an IDOR (insecure direct object reference) vulnerability in an internal API. Both deficiencies allowed researchers to obtain information on 64 million job applications, including names, emails, addresses, availability to work, and personality test information that applicants were required to complete.

This news enables us to focus on cyberattacks against chatbots, a technology increasingly used by companies across various industries to manage critical processes, such as product sales and customer service. AI-based conversational chatbots are capable of handling conversations with users and automating administrative tasks.

In fact, the rise of Artificial Intelligence enables companies to develop or hire increasingly accurate and efficient AI chatbots, capable of successfully responding to users and completing processes such as selling a product, resolving queries, or making reservations.

For all these reasons, chatbots are poised to enhance business profitability and automate roles responsible for performing repetitive tasks.

The advantages of implementing AI chatbots are obvious. Still, the security flaws detected in McHire, McDonald’s hiring suite developed by Paradox.ai, highlight the need to strengthen the cybersecurity of this technology to prevent cyberattacks against chatbots.

1. Why AI chatbots will proliferate in the coming years

Do you use ChatGPT, Gemini, or other generative AI in your daily life? We are going to take a leap and predict that, in most cases, the answer will be yes.

Artificial Intelligence is not a fad; it is here to stay, and its relevance in the daily operations of companies will continue to grow, fueled by technology that is improving at an unprecedented rate.

In some cases, AI chatbots are used internally. In this way, it is the professionals themselves who interact with them to resolve doubts, obtain data, or automate tasks.

However, as we mentioned earlier, many companies already use AI chatbots whose mission is to interact with users outside the organization:

  • Customer service via telephone, instant messaging apps, or websites.
  • Digital sales assistants to answer customer questions and advise them during the purchasing process.
  • Appointment booking tools, essential in certain sectors, such as healthcare.
  • Solutions focused on human resources management, for example, to process job applications.

The benefits of AI chatbots for companies are obvious:

  • Automate tasks that previously required human intervention.
  • Enable personalized and continuous customer service.
  • Resolve customers’ and prospects’ doubts and offer them services and products that suit their interests or needs.
  • Drive the generation of opportunities and sales through the digital channel.
  • Facilitate the management of opportunities and reduce the time it takes to convert them into sales.
  • Streamline and systematize processes related to human resources management, such as job applications and hiring procedures.
  • Reduce companies’ salary costs and increase their profitability and productivity.

2. What are the main risks of cyberattacks against AI chatbots?

However, the greater the relevance of chatbots and the amount of information they work with, the greater the risks associated with cyberattacks against chatbots. What are the main threats they face? Based on the Top 10 vulnerabilities in LLM (large language model) applications, we can highlight the following:

  • Direct and indirect prompt injection. This allows criminals to access confidential data, manipulate decision-making processes, and even take control of the chatbot.
  • Insecure output handling. It is essential to sanitize the chatbot model’s responses and encode the output that reaches users interacting with it. Otherwise, the company is exposed to XSS attacks and, to a lesser extent, remote code execution on backend systems or privilege escalation by attackers.
  • Denial-of-service attacks. The goal of these cyberattacks against chatbots is to disrupt their operation and cause companies to invest more resources in maintaining them. This can be critical, especially in sales and customer service processes.
  • Exfiltration of sensitive information from the company, its customers, or other individuals. The case discussed at the beginning of this article illustrates that cyberattacks against chatbots can lead to the disclosure of personal data, but other sensitive information can also be obtained, such as data on a company’s business strategy or even its industrial and intellectual property.
  • Insecure plugin design. When interacting with users, chatbots make automatic calls to plugins. It is essential that:
    • Malicious actors cannot use an incorrect implementation of the plugin system to compromise the service.
    • The system cannot use malicious plugins that can be used to compromise it later.
  • Giving the chatbot excessive autonomy. If a chatbot performs too many actions and has excessive permissions, and its security is compromised, it could open the door for malicious actors to carry out harmful actions that affect the company. This usually comes hand in hand with agents (systems that implement the MCP – Model Context Protocol).
Cyberattacks against chatbots can undermine sales processes and lead to the leakage of personal data

3. Security from development and throughout the lifecycle is essential against cyberattacks on chatbots

How can we address the identified risks and prevent or minimize the impact of cyberattacks on chatbots? Companies developing chatbots must implement a security policy from the design stage that prevents vulnerabilities that malicious actors can exploit. What should this policy include?

  • Threat modeling.
  • Secure coding practices and source code audits.
  • Secure data handling practices.
  • Security testing, such as DAST, is used to identify vulnerabilities.
  • Transparent design of chatbots so they can be audited on an ongoing basis.

In addition, it is essential to continuously monitor the security of chatbots for weaknesses and implement improvements in security mechanisms to address prompt injection attacks or safeguard the information of individuals and companies that manage chatbots, enabling them to perform their assigned tasks effectively.

4. Companies must focus on the security of third-party software

Some large companies develop their chatbots internally, but it is more common for them to contract solutions developed by specialized companies, as demonstrated by the case of McDonald’s.

In these cases, companies must bear in mind that cyberattacks against chatbots developed by third parties can be very costly.

One of the major constants in recent years in the field of cybersecurity has been the prevalence of supply chain attacks and security incidents originating from a successful attack on a supplier.

When designing a company’s security strategy, auditing its technological infrastructure, and identifying vulnerabilities to address them before they are exploited, it is crucial to consider all the software and hardware used by the business.

To prevent cyberattacks against chatbots or limit their impact on a company, it is necessary to design and implement measures that reduce the risks listed above and allow companies to anticipate malicious actors.

After all, it is clear that if AI chatbots are going to become increasingly relevant in companies, performing crucial tasks such as managing sales processes and having access to confidential information, hostile actors will target them for attack.

AI pentesting helps combat attacks against this cutting-edge technology

5. The consequences of cyberattacks against AI chatbots

The scope of the consequences of cyberattacks against chatbots depends on the role they play in the company and the information they have access to. That said, incidents in which the security of a chatbot is compromised can cause:

  • Financial losses. For example, if a customer service chatbot needs to be taken offline to deal with an attack, the company will need to reinforce that service by hiring additional personnel. Similarly, if a sales chatbot fails to work or works incorrectly, customer acquisition will be significantly impacted. Another scenario that may arise is that attackers gain access to critical information about the company’s business activity. To all this, we must add the cost of resolving the incidents.
  • Reputational damage. Suppose malicious actors manage to undermine the functioning of a chatbot or obtain confidential information about users who interact with it. In that case, the company will suffer severe damage to its reputation.
  • Penalties for undermining personal data protection. In the European Union, companies are required to implement the necessary technical and organizational measures to ensure the security of personal data. Violating this obligation is punishable by heavy fines.
  • Financial compensation for damages, in the event that the exfiltrated information is used to commit digital fraud against citizens and companies.

6. Cybersecurity services to combat cyberattacks against chatbots

How can companies that develop and market chatbots, as well as those that purchase them, strengthen their security?

Cybersecurity services are essential when it comes to preventing and responding to cyberattacks against chatbots:

  • Source code audits to detect security flaws in chatbot code and strengthen their security from the design stage.
  • Continuous security audits of chatbots to identify exploitable vulnerabilities and anomalous behavior, as well as security audits of exposed APIs.
  • AI pentesting simulates cyberattacks against real chatbots in controlled environments to identify weaknesses and mitigate specific vulnerabilities in AI systems, such as prompt injection, data exfiltration, or malicious code execution.
  • Threat Hunting to proactively identify undetected compromise scenarios and the most innovative attack techniques against constantly evolving technology.

In short, AI chatbots are becoming increasingly important in companies, and all indications suggest that their use will be widespread throughout the productive fabric in the coming years.

To fully leverage the benefits of this technology, it is crucial to enhance its security measures and prevent malicious actors from launching cyberattacks against chatbots that compromise a company’s business continuity, finances, and reputation.