Lessons from the Flight Deck
How the Use of Automation in Aviation Can Inform Artificial Intelligence in Financial Services
📅 August 13, 2024
While the definitions of “artificial intelligence” and “automation” have yet to achieve global consistency – and in some situations the use of these words is blurred – for this article we use the following working definitions. “Automation” refers to systems that execute tasks according to pre-defined rules, without learning from data. “Machine learning” refers to systems that improve their performance by identifying patterns in data; it is closely related to AI and particularly relevant in the financial crime compliance setting. “Artificial intelligence” refers to systems that perform tasks which would otherwise require human intelligence.
These three concepts are distinct but interrelated. Many businesses have implemented automation and are well progressed with machine learning, though few have fully reached the level of artificial intelligence. Although automation is the least sophisticated of the three, its successes and failures are instructive on the business benefits, implications, and regulatory intentions as companies progress with the implementation of AI.
“Automated systems have been successfully used for many years, and have contributed significantly to improvements in safety, operational efficiency, and precise flight path management… [However] pilots sometimes over-rely on automated systems – in effect, delegating authority to those systems…”
– Federal Aviation Administration Flight Deck Automation Working Group report, Operational Use of Flight Path Management Systems
As we seek to improve efficiency and reliability throughout our lives and work, automation – and its advancement into machine learning and artificial intelligence – provides significant benefits: machines are reliable, consistent, and able to process data at a speed and scale humans cannot match.
However, we as humans remain responsible for the technology we create, and for its decisions.
In the aviation industry, failure to adequately manage automation can have life-critical consequences. On Air France flight 447, Lion Air flight 610, and Ethiopian Airlines flight 302, automation produced outcomes the flight crews did not expect, ultimately leading to the loss of each aircraft and the lives of those on board.
In the financial services environment, the effects of flawed automation or AI may have significant, though more subtle, implications. The lessons learned on the flight deck can inform the implementation of advanced technologies in banking too.
“Other factors that affect the pilots’ decisions include the high reliability of the systems [resulting in] insufficient cross verification… [and] operational policies that direct the pilots to use automated systems over manual flying. Such policies are said to be preferred by some operators because the automated systems can perform more precisely while reducing pilot workload.”
– Federal Aviation Administration Flight Deck Automation Working Group report, Operational Use of Flight Path Management Systems
Perhaps counter-intuitively, the reliability and accuracy of machines can discourage humans from scrutinizing or challenging their outputs. We then become more likely to experience “automation surprises”, in which technology behaves in ways we do not understand or expect. The resulting erosion of human situational awareness may lie hidden until a significant technological failure occurs.
Responsible principles for the use of artificial intelligence have been created in several jurisdictions as well as globally. For example, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted its Recommendation on the Ethics of Artificial Intelligence in November 2021. The Recommendation recognizes the “profound and dynamic positive and negative impacts of artificial intelligence (AI) on societies, environment, ecosystems and human lives… [but that they] can deepen existing divides and inequalities in the world, within and between countries, and that justice, trust and fairness must be upheld…”
Financial services regulators have made it clear that they consider the management of AI-related risks to be a board-level responsibility, and that they will take action where companies do not meet their requirements.
Regulatory Responses in the United States, European Union, and United Kingdom
In the U.S., the Consumer Financial Protection Bureau issued guidance in 2023 about the legal requirements that lenders must meet when using artificial intelligence. “Lenders must use specific and accurate reasons when taking adverse actions against consumers… This requirement remains even if those companies use complex algorithms and black-box credit models that make it difficult to identify those reasons.”
U.S. Treasury has also explored opportunities and risks related to the use of AI, including concerns about bias and discrimination in financial services and challenges with “explainability”. Explainability refers to the ability of human users to understand a model’s outputs and decisions, or how the model establishes relationships based on its inputs. Proposed amendments to the U.S. Anti-Money Laundering Act would require FinCEN to issue rules specifying the standards for testing methods applied to innovative approaches such as machine learning.
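To make explainability concrete, here is a minimal sketch of one way a lender could derive adverse-action reason codes from a simple linear credit model: rank each feature by how far it pulled the applicant’s score below the population average. The feature names, weights, and averages are invented for illustration; production models and reason-code methodologies are considerably more involved.

```python
import numpy as np

# Hypothetical, illustrative model: the features, weights, and averages
# below are invented for this sketch, not taken from any real lender.
FEATURES = ["credit_history_years", "debt_to_income", "recent_delinquencies"]
WEIGHTS = np.array([0.6, -1.2, -0.9])   # assumed learned coefficients
MEANS = np.array([8.0, 0.30, 0.2])      # assumed population averages

def adverse_action_reasons(applicant, top_n=2):
    """Rank features by how much they pulled this applicant's score below
    the population average -- one simple way to produce the 'specific and
    accurate reasons' regulators expect for an adverse action."""
    contributions = WEIGHTS * (applicant - MEANS)  # per-feature effect vs. average
    order = np.argsort(contributions)              # most negative first
    return [FEATURES[i] for i in order[:top_n] if contributions[i] < 0]

# An applicant with a thin credit file and a high debt-to-income ratio:
print(adverse_action_reasons(np.array([1.0, 0.55, 2.0])))
# -> ['credit_history_years', 'recent_delinquencies']
```

The same idea scales to more complex models via established attribution methods, but the obligation is unchanged: the institution must be able to state, in human terms, why the model reached its decision.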
In the European Union, the Artificial Intelligence Act was approved by the Council of the European Union on May 21, 2024, and entered into force on August 1, 2024. The Act bans some types of AI systems as creating an “unacceptable risk”, such as government-run social scoring. High-risk applications – including systems used for automated processing of personal data to assess aspects of a person’s life, such as their economic situation – are subject to specific legal requirements. It is likely that many applications of AI within banks, such as credit scoring, will meet the definition of high risk and require additional controls.
In the United Kingdom, the Financial Conduct Authority took action against Amigo Loans Ltd for poorly implemented automation. While automation is not as advanced as AI, the action taken indicates regulatory intent and provides lessons for financial firms.
Amigo Loans Ltd (Amigo), a UK company, was assessed a fine of $94 million (£72.9 million) by the UK Financial Conduct Authority (FCA) for failing to properly assess the affordability of its lending, relying instead on a complex IT system with a high degree of automation. Amigo provided guarantor loans aimed at consumers who may have been unable to access finance from traditional lenders due to their circumstances or credit history. Both borrowers and guarantors needed to pass Amigo’s affordability checks for a loan to be approved.
The FCA found that Amigo’s heavy reliance on its automated system meant the affordability of its loans was not adequately assessed. In addition to these issues with its use of technology, Amigo had failed to maintain adequate records of its historic business processes and had deleted the email accounts of former staff, both of which hampered the FCA’s investigation.
The FCA held back from imposing the $94 million fine because the company demonstrated that paying it would cause serious financial hardship and threaten Amigo’s ability to pay redress to its customers.
“At the heart of AI is data. The more complete the data on which an AI is trained, the more valuable will be the output. But just like human beings, if an AI has nothing on which to base its predictions, its output will be worthless.” – Forbes Technology Council
In the example above, Amigo’s technology used automation – a set of pre-defined rules – rather than any AI capability to learn on its own. As AI, including machine learning, introduces more “distance” between human-directed inputs and AI outputs, the ability to demonstrate the explainability of decisions becomes even more critical to meeting ethical and regulatory obligations.
Artificial intelligence offers advantages: machine learning algorithms can be designed to focus only on the variables that improve predictive accuracy rather than on the subjective factors influencing human decision-making.
However, AI algorithms and training datasets need to be carefully constructed to achieve these potential benefits. For example, if demographic categories are missing or under-represented in training datasets, models can fail to generalize correctly and produce more errors for those groups. A well-known example is the AI developed in 2014 by Amazon to rate job applicants. The technology was found to favor male applicants for software and technical positions: the models had been trained to score applications on patterns from the previous 10 years of submissions – almost all of them from men, reflecting a gender-skewed professional field. The initiative was cancelled in 2017.
One of the benefits of machine learning is that it can identify correlations and patterns in datasets that may not be detected by human analysts, such as indicators of fraud in customer and transaction data. However, datasets and algorithms should consider both “positive” and “negative” weightings to produce accurate results.
Case Study
For example, when a customer opens a new account at a bank, a Social Security Number (SSN) with no associated credit history may be given a “positive weighting” as an indicator of fraud, meaning an investigator would treat it as suspicious.
However, the length of time the person has been present in the United States should also be considered. If a customer has resided in the country only briefly, say a few weeks, this should contribute a “negative weighting” to an assessment of suspicious activity: it may indicate a person who has recently relocated and has received their SSN but has not yet built up a credit history. Data on length of presence is available to the bank in the documents it collects during onboarding, such as identification documents (including a visa and its validity dates) and address history (if the customer previously held accounts in a different country).
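As a minimal sketch of how such weightings might combine, the snippet below scores the scenario from this case study. The weights, threshold, and field names are invented for illustration; a production system would derive them from labeled historical data.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    has_credit_history: bool
    months_in_country: int  # derived from onboarding documents (e.g. visa dates)

# Illustrative weights only; a real model would learn these from labeled data.
NO_HISTORY_WEIGHT = 2.0       # "positive weighting": raises suspicion
RECENT_ARRIVAL_WEIGHT = -1.5  # "negative weighting": lowers suspicion
RECENT_ARRIVAL_MONTHS = 6     # assumed threshold for "recently relocated"

def fraud_risk_score(a: Applicant) -> float:
    """Combine positive and negative weightings into a single risk score."""
    score = 0.0
    if not a.has_credit_history:
        score += NO_HISTORY_WEIGHT
        if a.months_in_country < RECENT_ARRIVAL_MONTHS:
            # A recent arrival plausibly explains the missing credit history.
            score += RECENT_ARRIVAL_WEIGHT
    return score

# A long-term resident with no credit history scores 2.0;
# a recent arrival with the same thin file scores only 0.5.
print(fraud_risk_score(Applicant(has_credit_history=False, months_in_country=120)))
print(fraud_risk_score(Applicant(has_credit_history=False, months_in_country=1)))
```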
Financial institutions have a responsibility to supervise their AI to ensure it is producing accurate results, particularly for less common (under-represented) client groups.
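One practical form that supervision can take is routine per-segment performance monitoring. The sketch below assumes the institution keeps a table of model predictions alongside confirmed outcomes, tagged by client segment; the column names and data are illustrative.

```python
import pandas as pd

# Minimal sketch of per-segment accuracy monitoring. Assumes predictions
# and confirmed outcomes are available; values here are made up.
results = pd.DataFrame({
    "segment":   ["retail", "retail", "retail", "new_resident", "new_resident"],
    "predicted": [1, 0, 0, 1, 1],
    "actual":    [1, 0, 0, 0, 0],
})

results["error"] = results["predicted"] != results["actual"]
summary = results.groupby("segment")["error"].agg(["size", "mean"])
summary.columns = ["n", "error_rate"]
print(summary)
# Small segments with outsized error rates (here, 'new_resident')
# are the first candidates for investigation and model retraining.
```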
Financial institutions who use automation and AI responsibly will benefit from increased efficiency and accuracy. Those who do not will find themselves subject to regulatory scrutiny and investigation.
While responsibility for implementing AI and automation extends from the board level of a business, there are several actions a Chief Compliance Officer can take to fulfill their responsibilities when using automation and AI in financial crime compliance specifically.
The integration of AI into financial compliance is becoming increasingly important for managing the escalating complexity and sophistication of financial crime. As the financial industry continues to evolve, AI presents opportunities to enhance financial crime compliance, reduce operational costs, and improve the accuracy and effectiveness of compliance programs.