Cybercriminals use deepfakes to execute a more sophisticated CEO fraud that is harder for the victim to detect
In 2019, Empresa Municipal de Transportes de Valencia (EMT) suffered a €4 million fraud. How did it happen? A group of cybercriminals resorted to one of the most damaging social engineering techniques of recent years: CEO fraud. By impersonating identities via email and making fake phone calls, the criminals got EMT's director of administration to order up to eight transfers worth €4 million for a supposed acquisition in China.
A year later, in 2020, during the coronavirus crisis, Zendal Pharmaceuticals fell victim to a CEO fraud worth €9.7 million. The criminals' operation was similar: they impersonated the company's CEO to instruct a financial manager to make transfers for a supposed acquisition. They also posed as professionals from KPMG, one of the Big Four global consultancy firms, to send the deceived manager the payment orders and false invoices attesting to the transactions.
These two notorious cases are just a tiny sample of the severe economic, legal and reputational consequences of a company and its top executives being victims of CEO fraud.
If this kind of cyber fraud already puts companies and their managers in check, the threat landscape becomes even more complex when we add the consolidation of Artificial Intelligence to the equation.
Below, we will explore how the emergence of deepfakes can make the CEO fraud technique more sophisticated and more difficult for organizations to detect.
1. What is CEO fraud?
CEO fraud is simply a more sophisticated and ambitious version of the most popular social engineering technique: phishing.
The basic mechanics of this type of attack are as follows:
- A business target is set, and criminal intelligence is conducted to understand how the organization and the specific targets within the organization work.
- A fake email address is created that looks legitimate, for example by mimicking the company's domain with minor variations that go unnoticed.
- An email is sent to a manager with the capacity and authority to order large money transfers, posing as a hierarchical superior such as the CEO or CFO. This email lays the groundwork to justify the transaction, for example, the acquisition of another company, usually in a different country where a foreign language is spoken.
- Different transfers are requested over time to complete the transaction that justifies them. While the CEO fraud is active, documentation such as invoices or contracts is generated and sent to the victim to give the scam an appearance of legitimacy.
- When the fraud is finally detected by the company or the banks it works with, the criminals disappear, and the money is moved through different countries to make it difficult to recover.
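The lookalike-domain step described above can also be screened for mechanically on the defensive side. The sketch below is a minimal illustration (the domain names are hypothetical, not from any real case) that flags sender domains sitting within a small edit distance of the legitimate one without matching it exactly:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def is_lookalike(sender: str, legit_domain: str, max_dist: int = 2) -> bool:
    """Flag senders whose domain is close to, but not identical to, the real one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    distance = edit_distance(domain, legit_domain.lower())
    return 0 < distance <= max_dist


# "acrne-corp.com" swaps the 'm' for the visually similar 'rn' pair.
print(is_lookalike("ceo@acrne-corp.com", "acme-corp.com"))  # True
print(is_lookalike("ceo@acme-corp.com", "acme-corp.com"))   # False (exact match)
```

A real mail gateway would also check homoglyphs from other scripts and recently registered domains; the point here is only that the "minor variation" pattern the criminals rely on is detectable in code.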
2. Why do these scams succeed?
Beyond this basic mode of operation, malicious actors have perfected CEO fraud to successfully defraud their victims by incorporating fake calls, as in the EMT case, or by impersonating several individuals and organizations, as in the Zendal attack.
As a result, CEO fraud has established itself as a top-level fraudulent activity. As recently as this year, Europol dismantled a French-Israeli cybercriminal group that had stolen $38 million in just a few days using this criminal methodology.
Although each criminal group has its own methodology, and the tactics, techniques and procedures (TTPs) of criminals evolve, all scams that use CEO fraud combine three elements that enable them to:
- Get around the victim's resistance and suspicion.
- Force them to act without due diligence.
- Prevent the victim from interacting with colleagues or even contacting their superior directly through a reliable channel.
2.1. Between seduction and deception
These three critical elements for successful CEO fraud are:
- Hierarchy. The order is given by a hierarchical superior, which is why many professionals, even if they hold managerial positions, do not proceed to question it, even if it seems strange to them, as happened in the case of the pharmaceutical company.
- Discretion. The messages emphasize the need for the manager to be discreet when handling the financial transactions. Why? The alibi is a sensitive operation, such as an acquisition or a key contract. Moreover, on a purely psychological level, being part of the small circle of people who know about the transaction makes the victim feel essential and ignore the warning signs. This feeling of empowerment may even lead them to carry out negligent actions and overstep their functions, as occurred in the attack on the EMT.
- Haste. The seduction of the victim is completed with a classic element in social engineering attacks: haste. Payments must be made as soon as possible to prevent the operation from being frustrated.
3. What is deepfake?
In Face/Off, a cult movie from 1990s Hollywood, a policeman played by John Travolta puts on the face of a dangerous criminal played by Nicolas Cage. Literally. Why? To impersonate him and catch his accomplices. Facial cloning has not gone down this fanciful path, but it has become a reality in our present thanks to deepfakes.
Deepfake refers to fake images, sounds, or audiovisual files generated with artificial intelligence tools. The evolution of generative AI has reached such a point that it is already difficult to discern whether a photo is real or has been generated with Artificial Intelligence. And the same is now happening with videos.
Deepfakes can have significant social and political repercussions. Think, for example, of a fake video disseminated before an election in which a candidate is seen committing a crime or making inflammatory statements. In fact, in both the war in Ukraine and the conflict between Israel and Hamas, deepfakes have been disseminated by the different sides to manipulate international public opinion.
3.1. Deepfakes of people and against people
However, there is no need to look to major armed conflicts to observe the impact of deepfakes, because they have already crept into the daily lives of thousands of people far from the public spotlight.
Our whole life is on the Internet. It is difficult for our digital footprint not to include videos, audio, or images that can be used to generate manipulated documents in which we are seen performing actions we did not commit; we are caught in places we were not in or uttering phrases we never said.
Malicious actors can use generative AI to produce deepfakes that help them achieve their criminal goals and commit fraud. The ability to clone faces and voices represents a genuine revolution in the implementation of social engineering techniques such as CEO fraud.
4. From email to the phone call: Cloning the voice of managers
Although Artificial Intelligence systems have only become popular in recent years, in the wake of generative AIs such as ChatGPT, they had been employed for fraudulent purposes long before.
In fact, in 2019, a fraud was made public in which voice cloning was used. The director of the UK branch of a German energy company was the victim of a CEO fraud executed thanks to a deepfake. This senior manager received an alleged phone call from his boss, instructing him to transfer $240,000 to an account in Hungary. Although he found the request strange, he prioritized the hierarchical mandate.
What would have happened if the order had come to him via email instead of receiving a call, as in the classic CEO fraud? Would he have called his boss to confirm the transfer order? We will never know, but this case shows that voice cloning lends a patina of credibility to this kind of fraud against top executives.
After this case, others have followed over the years. For example, in 2021, a $35 million fraud came to light in which a senior executive had also been duped through a voice deepfake. Hence, public authorities, including Europol, have warned of the pernicious use of voice cloning to commit fraud and defraud companies and professionals.
5. One step further in CEO fraud: video calls
Voice deepfake can be very dangerous if used to defraud companies or individuals, but we are heading towards an even more delicate scenario: CEO fraud using a video call.
The period from the popularization of generative AI to today has been extraordinarily short. A year ago, most of the population had not heard of ChatGPT or DALL-E. Yet in recent months we have witnessed continuous improvement in the results these Artificial Intelligence systems produce.
So much so that it is no longer only possible to clone the voice but also to clone our faces and create audiovisual deepfakes.
How does this affect CEO fraud? It further complicates the detection of scams. Why? A person may be sufficiently aware to be wary of an order with financial implications received via email. In light of the increase in scams, a manager might even be cautious about a phone call, however well they know the voice on the other end. But how can a professional be suspicious of an order from a superior on the other side of a screen, when they can see their face and hear their voice?
6. AI is also helpful for perfecting all the elements of deception
The improvement of CEO fraud thanks to AI is not only due to the use of deepfake, although this technique to fake videos, images, and sounds is mainly responsible for the evolution of this kind of scam.
Artificial Intelligence systems can also be advantageous when it comes to:
- Composing messages to victims, replicating the style and turns of phrase of the impersonated persons.
- Replicating the corporate identity of the emails.
- Drafting the documentation used during the CEO fraud: payment orders, invoices, contracts…
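Even when the wording and corporate identity of a message are flawless, spoofed mail often fails the authentication checks that receiving servers already record. The sketch below is a minimal illustration using Python's standard `email` module (the message, server names, and domain are hypothetical): it extracts any SPF, DKIM, or DMARC result in the `Authentication-Results` header that is not a pass.

```python
from email import message_from_string


def auth_failures(raw_message: str) -> list:
    """Return SPF/DKIM/DMARC clauses whose result is not 'pass'."""
    msg = message_from_string(raw_message)
    failures = []
    for header in msg.get_all("Authentication-Results", []):
        # The first clause is the authserv-id (the checking server); skip it.
        for clause in header.split(";")[1:]:
            method, _, result = clause.strip().partition("=")
            if method in ("spf", "dkim", "dmarc") and not result.startswith("pass"):
                failures.append(clause.strip())
    return failures


raw = (
    "Authentication-Results: mx.example.net; "
    "spf=fail smtp.mailfrom=acme-corp.com; dkim=none\n"
    "From: CEO <ceo@acme-corp.com>\n"
    "Subject: Urgent transfer\n\n"
    "Please wire the funds today."
)
print(auth_failures(raw))  # ['spf=fail smtp.mailfrom=acme-corp.com', 'dkim=none']
```

In practice this check lives in the mail gateway rather than in the victim's hands, but it illustrates why well-crafted content alone does not make a forged email technically legitimate.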
Any small detail can trigger the suspicion of the CEO fraud victim and lead them to share their doubts with others, causing the scam to fail. Hence, unlike mass phishing, for example, this kind of scam has to be crafted to the millimeter. This entails a significant investment of resources and time by criminal groups that, after all, intend to make off with millions of dollars.
In this sense, artificial intelligence allows malicious actors to perfect their strategies and methodologies and reduces the time they have to devote to the design and execution of attacks.
7. Is this the real life? Is this just fantasy?
The improvement of deepfakes and the growing number of frauds have made it difficult to discern what is real and what is not. Hence, many people replay the first verse of Queen's Bohemian Rhapsody: Is this the real life? Is this just fantasy? If a financial manager is called by the CEO of his company, with whom he has had a close relationship for years and whose voice and manner of speaking he knows perfectly well, should he suspect a hoax?
The zero-trust philosophy has long been present in the cybersecurity field. However, it has become more prevalent in recent years amid the increase in cyberattacks and threats against companies.
The zero-trust approach advocates, as its name suggests, approaching protection from a total absence of implicit trust. In practice, this means, for example, complying with the principle of least privilege.
In the age of deepfakes, acting with distrust has become a central issue. However, can we be suspicious all the time and in all our interactions? It is essential to find the balance that allows us to act cautiously without questioning any communication we receive. Otherwise, companies would not be able to operate normally. Cybersecurity training plays a crucial role in this task.
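This balance between caution and operability is usually implemented as dual control plus out-of-band verification for payments. The sketch below is a hypothetical policy, not a standard: the threshold, roles, and rules are assumptions chosen for illustration. A large transfer cannot be executed unless it has been confirmed through a pre-registered callback channel and approved by two people other than the requester:

```python
from dataclasses import dataclass, field


@dataclass
class TransferRequest:
    requester: str
    amount_eur: float
    # True only after the order is confirmed by calling the purported
    # originator back on a pre-registered number (never the inbound channel).
    verified_by_callback: bool = False
    approvers: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester cannot approve their own order.
        if approver != self.requester:
            self.approvers.add(approver)

    def executable(self, threshold: float = 10_000) -> bool:
        # Below the threshold, one independent approval suffices; above it,
        # require callback verification plus two independent approvers.
        if self.amount_eur < threshold:
            return len(self.approvers) >= 1
        return self.verified_by_callback and len(self.approvers) >= 2


req = TransferRequest(requester="finance.manager", amount_eur=4_000_000)
req.approve("finance.manager")   # ignored: self-approval
print(req.executable())          # False: no callback, no independent approvers
```

Under such a policy, the EMT-style scenario stalls at the callback step: the fraudster controls the inbound call but not the pre-registered number the victim is required to dial back.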
8. Proactivity and innovation to protect companies and their managers
How can companies and their executives defend themselves against the CEO fraud technique perfected by deepfake?
It is essential to be proactive and to have cybersecurity professionals specialized in:
- Designing and executing social engineering tests to:
- Raise awareness among an organization's professionals of the dangers of CEO fraud and other techniques made more sophisticated by deepfakes and AI.
- Assess the organization’s level of maturity against these frauds and take steps to increase protection.
- Cyber intelligence services, such as counter-phishing, to detect social engineering campaigns and organizational risks, and to prevent frauds that may affect a company and its management.
- Threat-hunting services to continuously monitor the organization’s infrastructure and proactively detect targeted attacks such as CEO fraud.
- Red Team scenarios to simulate a CEO fraud attack using deepfake to assess the company’s detection and response capabilities.
In addition, cybersecurity experts must focus on continuous training, research, and innovation to adapt their services and strategies to the technological transformations associated with generative AI and the new TTPs of malicious actors.
8.1. Research and vigilance to prevent severe economic and legal consequences
In short, CEO fraud is an attack technique that has generated substantial economic losses for numerous companies worldwide. Moreover, this type of targeted attack may become one of the biggest threats in the coming years thanks to generative AIs that allow criminals to perfect messages or clone voices and open the door to making fake video calls that may be undetectable by victims of the attack.
That’s why it’s critical to have cybersecurity experts conducting continuous monitoring and helping companies establish legitimate and trustworthy communication processes.
At stake are not only frauds that can run into the millions and seriously affect the solvency of companies, but also the legal liability arising from a successful CEO fraud.
Let's return to the real case with which we opened this article. The courts are currently determining whether, beyond the cybercriminals, there is also accounting and criminal liability on the part of the executive who fell victim to the fraud and the bank that processed the payments. The board has already been ordered to repay the €4 million defrauded, as it was considered directly responsible, a ruling that shows the severe repercussions CEO fraud can have for both companies and their managers.
This article is part of a series of articles about AI and cybersecurity