Deepfake Deep Dive
Actions for financial institutions to protect themselves and their customers from fraud
📅 August 27, 2025
Artificial intelligence exponentially increases the volume, value, and effectiveness of fraud attacks. One clear example is “business email compromise” (BEC), also called “CEO fraud”.
A BEC or CEO fraud is a social engineering attack in which a fraudster assumes the identity of a trusted persona, such as the CEO of a company, to manipulate victims into taking a desired action. These actions could include transferring funds or handing over sensitive information. BECs target specific individuals and are often personalized to the victim, using information the fraudster collects during detailed research. AI-generated deepfakes and other content provide significantly enhanced tools to perpetrate fraud.
Arup is an engineering firm with 34 offices and 18,500 staff worldwide. In 2024, it was targeted by a CEO fraud which resulted in the loss of HK$200 million (US$25 million).
An employee in the finance team in Hong Kong received an email purporting to be from the Chief Financial Officer, who was based in the United Kingdom, requesting them to make several payments. Initially, the staff member was hesitant about the legitimacy of the email. The supposed CFO confirmed the payment instructions via a video conference.
The staff member made 15 transactions totaling $25 million to five local bank accounts. It was only later, when the employee checked with head office, that the fraud was identified.
We have seen AI deepfakes used in frauds before.
But in this case – it was not just the CFO on the video conference.
There were multiple other staff present too.
The target of the fraud recognized many of their colleagues, and because of this they were persuaded of the legitimacy of the call with the CFO.
But all the staff were deepfakes.
During the investigation, police assessed that the attacker downloaded videos of the colleagues in advance and used AI to add fake voices.
In CEO frauds, AI deepfakes are used to target businesses, including companies, suppliers, and business partners. The same methodologies also can be used to create more personalized and credible romance scams, family/medical emergency scams, and other frauds.
Many financial institutions provide resources and awareness campaigns to help their customers avoid being targeted by fraud. These institutions gain a commercial advantage by strengthening their relationships with their customers by adding value beyond simply products and services. They also provide protection for the institution and financial system by reducing the likelihood that the institution will process fraudulent transactions.
There are other reasons institutions should consider enhancing their fraud awareness programs. Recognizing the increase in fraud and the greater resources available to institutions relative to their customers, some regulators hold financial institutions responsible if they do not have adequate measures in place to identify when their customers are being scammed.
The New York Attorney General commenced legal action against Citibank in 2024 for failing to protect and reimburse victims of fraud. The lawsuit alleges that Citibank does not adequately protect its customers from fraud losses, misleads account holders about their rights, and denies reimbursement when they are victims of fraud.
As an example, in October 2021 a customer received a text message that appeared to be from Citi and instructed her to log into a website or call her local branch. The customer clicked the link but did not provide any additional information. She called her local branch to report suspicious activity but no action was taken. A few days later, the customer identified that a fraudster had changed her banking password, transferred $70,000 between her accounts, and executed a $40,000 electronic wire transfer. None of these were consistent with her previous account activity. The customer repeatedly contacted the bank but was told her claim for fraud was denied.
In January 2025, Citi’s motion to dismiss the claim was denied and the legal action continued. The Attorney General stated that this “will allow us to continue our case against Citi to help those whose savings were stolen and ensure the bank follows the law to protect its customers.”
Weaknesses in fraud detection and prevention measures make institutions and customers more vulnerable to many types of fraud, including those enabled by AI. Enforcement actions like the Citi case also signal regulators' expectations that institutions protect their customers.
The United Kingdom Financial Services Ombudsman ordered a bank to reimburse £100,000 ($134,000) plus interest to a customer who lost money due to fraud. The Ombudsman held the bank responsible because branch staff had indications their customer was a victim of a scam but did not take sufficient action.
The customer, “Nadia,” received a call from a person stating they were from the police. She was told that staff at her local bank branch had been stealing money from customers and that she needed to move all her money to a “safe account” to protect herself from fraud. Nadia made four transfers over four days of £25,000 (approx. $33,000) each to an overseas account specified by the “police officer.”
When Nadia made the transfers, bank staff realized they were outside her usual activity and asked her questions. Nadia explained that they were payments for a wedding, following instructions from the fraudster. She said that she was nervous and stressed but because she had been able to answer the questions, the bank staff followed her instructions.
The UK Financial Services Ombudsman identified that the bank had enough information to identify that Nadia could have been the victim of a scam, even though she had been able to answer the questions. Had staff asked more questions or called the police, the scam could have been prevented.
While this fraud did not involve the use of AI, it provides an example of how customers can be coached by fraudsters to pass bank checks. With the increasing use of sophisticated AI-enabled frauds, customers are increasingly vulnerable – and financial institutions can and must help.
Fraud intersects multiple departments within an institution. Many types of frauds are financial crimes or are closely associated: proceeds of fraud are often laundered, or frauds are committed to acquire funds for terrorism or other illicit activities. Fraud can also be seen as an operational risk as it may result from inadequate processes, systems, or human factors. And it closely links with cyber security.
To enable a more effective response to prevent and detect fraud, three key steps an institution should take are:
1. Establish a joint and coordinated approach between departments, such as stakeholder working groups that include representatives from all departments with fraud-related responsibilities. This ensures knowledge is shared, minimizing duplication and gaps.
2. Provide training to staff on how to identify fraud, customized to their role and focused on evolving fraud trends and the use of technology.
3. Consider a customer fraud awareness program relevant to the institution's customer base, delivered through written resources, email updates, and other formats.
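In both the Citi and Nadia cases, the fraudulent transfers fell well outside the customer's usual activity, and that deviation was the signal the institutions failed to act on. As an illustration only, a transaction-monitoring rule of this kind can be sketched with a simple statistical check; the function name and z-score threshold below are hypothetical, and production systems use far richer features (payee history, device, velocity, behavioral biometrics) than amount alone.

```python
from statistics import mean, stdev

def flag_unusual(history, amount, z_threshold=3.0):
    """Flag a transfer whose amount deviates sharply from the
    customer's prior activity (simple z-score rule, illustrative only)."""
    if len(history) < 2:
        return True  # too little history to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > z_threshold

# Prior transfers in USD; a $40,000 wire stands out immediately.
history = [120, 85, 300, 150, 95, 210]
print(flag_unusual(history, 40_000))  # True: far outside usual activity
print(flag_unusual(history, 180))     # False: in line with history
```

A flag like this is only a trigger for human follow-up; as the Nadia case shows, the questions staff then ask, and whether they probe beyond rehearsed answers, determine whether the fraud is actually stopped.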
Deloitte estimates that generative-AI enabled fraud losses will increase from $12 billion in 2023 to $40 billion in 2027. By taking more effective joint action, within institutions and with customers and communities, we can more effectively protect the financial system and each other from fraud.
Author
Catherine M. Woods is an Associate Managing Director at the Institute for Financial Integrity where she leads initiatives on countering cartels and Chinese Money Laundering Networks, illicit procurement networks and export controls, and emerging technologies including digital assets. For more information about our products and services, please contact info@finintegrity.org.