Data Ethics and the Use of AI/ML in Financial Services: How to mitigate risks

Tim Jennings

Senior Director, Global Data Practice Lead, London, UK


Financial Services (FS) firms have widely adopted Artificial Intelligence (AI), and most expect to extend its use further. The number of Machine Learning (ML) applications is predicted to grow 3.5-fold over the next three years.

As well as the undoubted benefits, this creates novel challenges and new risks for clients, employees and accountable managers. These risks are real and present, and FS firms must put mitigations in place now to avoid the client, reputational and regulatory repercussions that will otherwise follow when things go wrong.

The oversight of AI/ML in financial services typically relies on familiar model risk management approaches. This may be appropriate for trading decision-making, but it fails to recognize the breadth of use cases where AI/ML could be deployed, and the complexity and potentially personal nature of such cases, for example in HR, KYC, or decisions about individual customers in retail banking.

The Bank of England and the UK regulatory authorities (the PRA and FCA) are sufficiently concerned that they have issued a discussion paper (October 2022) exploring whether additional regulation is needed to address the unique risks of AI/ML in Financial Services. They have also established the AI Public-Private Forum to maintain the dialogue in this area.

Mitigating risks
This paper is aimed at FS firms looking to mitigate the risk of inadvertent harm arising from poor AI/ML practices and governance. We examine the behaviours and risks resulting from poor practice and explore the building blocks of good practice, using case studies from the FS environment where available and drawing on examples from other fields. We illustrate how these risks can lead directly to 'harms' when AI/ML technology is poorly applied.

We conclude by looking at how organizations can construct a process-based governance framework to mitigate these risks.

Identifying potential areas of harm from the use of AI/ML, and the underlying causes
Crucially, there are recurring analytical and process problems which give rise to inaccuracy, error, bias, or exposure. These problem areas become the targets of governance policy and execution when designing an effective AI/ML control and risk mitigation framework.

What do we mean by ethical ‘harms’?

The Alan Turing Institute identifies the following potential harms caused by AI systems:

  • Bias and discrimination
  • Denial of individual autonomy, recourse and rights
  • Non-transparent, unexplainable, or unjustifiable outcomes
  • Invasions of privacy
  • Isolation and disintegration of social connection
  • Unreliable, unsafe or poor quality outcomes

Other AI ethics frameworks define outcomes/harms in broad, generic terms: beneficence, non-maleficence, fairness, equality, and so on. As we will see, it is not always possible to measure the performance of an analytics solution against these dimensions, and so we prefer the Turing definitions for this discussion.

We have mapped the Turing potential harms against the underlying analytics risks in the chart below (Figure 1).

The rest of this section explores each of these risks in greater depth, explaining the problem and why it must be controlled. We assume no prior knowledge of AI/ML.

Figure 1. Potential harms mapped against underlying analytics risks

Potential source of ethical and regulatory risks #1: Transparency, or not understanding why your AI generates the results it does

Financial services models might be tasked with approving or rejecting loans, assessing credit risk, assessing insurance risk, assembling investment strategies, and so on. It is essential to the credibility of the FS organization that it can explain the decisions or recommendations that are made. If you cannot explain the results of your model, how can you be confident in the quality of the outcome? Further, if you cannot explain why (for instance, when someone is turned down for an account or a loan), can you be sure you are respecting their autonomy, recourse and rights?

Use of AI/ML systems may obscure the basis of financial decisions, and they may make decisions that are unreliable in a variety of ways.
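
As a sketch of what 'explainable' can mean in practice, the snippet below builds a toy linear scorecard and derives per-applicant reason codes from it. The feature names, data, and model are hypothetical, purely for illustration; for non-linear models, attribution tools such as SHAP play a similar role.

```python
# A minimal sketch, assuming a hypothetical linear loan-approval model.
# Feature names and data are illustrative, not a real scoring system.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "income_k": rng.normal(45, 12, n),              # annual income, thousands
    "debt_ratio": rng.uniform(0, 1, n),
    "years_at_address": rng.integers(0, 30, n).astype(float),
})
# Synthetic stand-in for historical approve/decline decisions.
y = ((X["income_k"] > 40) & (X["debt_ratio"] < 0.5)).astype(int)

model = LogisticRegression(max_iter=1_000).fit(X, y)

# For a linear model, each feature's pull on the log-odds relative to an
# average applicant is coefficient * (value - mean): a concrete, auditable
# reason that can be quoted back to a declined customer.
applicant = X.iloc[0]
contributions = model.coef_[0] * (applicant - X.mean())
print(contributions.sort_values())  # most negative = strongest reasons to decline
```

The specific technique matters less than the principle: every automated decision should come with a recorded, defensible explanation.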

We can start with a couple of quick, fun examples of AI/ML getting things wrong, to remind us that these systems are essentially 'dumb'. A Machine Learning model was trained to identify skin cancers from photographs. Unfortunately, in the training data there was usually a ruler in the picture when the lesion was a genuine cancer. The model therefore learned the correlation between a ruler and a cancer diagnosis, making it useless for diagnosing the condition from new photographs. There are similar examples: an AI labelling a hillside as 'sheep' without any animal in the picture, or another distinguishing 'dog' from 'wolf' based purely on whether there was snow in the background. Things that are obvious to humans are not evident to an AI/ML correlation engine.
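
This failure mode is easy to reproduce. In the toy model below (entirely synthetic data), an artefact feature co-occurs perfectly with the label in training, the analogue of the ruler; training accuracy looks excellent, then collapses once the model is deployed where the artefact is absent.

```python
# A toy illustration (hypothetical data) of the 'ruler' failure mode:
# an artefact that co-occurs with the label in training is learned as
# the signal, so performance collapses when the artefact disappears.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
signal = rng.normal(size=n)                      # the genuine predictor
label = (signal + 0.5 * rng.normal(size=n) > 0).astype(int)
artefact = label.copy()                          # 'ruler in shot': present iff positive case

train = np.column_stack([signal, artefact])
model = LogisticRegression(max_iter=1_000).fit(train, label)

# Deployment: the artefact no longer tracks the label.
new_signal = rng.normal(size=n)
new_label = (new_signal + 0.5 * rng.normal(size=n) > 0).astype(int)
deployed = np.column_stack([new_signal, np.zeros(n)])  # no rulers in the wild

print("training accuracy:", model.score(train, label))        # near-perfect, flattering
print("deployed accuracy:", model.score(deployed, new_label)) # far worse
```

Validating on data that genuinely resembles deployment conditions, rather than data carrying the same artefacts, is the basic defence.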

There are richer examples from companies that you'd expect to be leaders in this field. With some fanfare, Google developed its 'Flu Trends' solution, which analysed the regional query patterns received through its search engine and was ostensibly able to identify the rise and spread of flu. This worked well for a while, but then appeared to lose its way. Google engineers investigated and discovered the solution was actually tracking the onset of winter, based on its correlation with seasonal search terms (college basketball, etc.), rather than flu symptoms. So, as long as the flu outbreak occurred when it might be expected, at the start of winter, the model worked. But an outbreak that occurred out of season was largely missed, and the model overestimated cases when the onset of seasonal flu was delayed. Imagine the issues that might arise if public health coordination were dependent on such a model. [continues]
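
A simple out-of-time backtest exposes this class of failure. The sketch below (synthetic numbers, loosely modelled on the Flu Trends story) fits a purely seasonal proxy to one year of data, then checks it against a second year containing an out-of-season outbreak that the proxy cannot see.

```python
# A hypothetical illustration: a model fitted to a seasonal proxy looks
# accurate while outbreaks follow the calendar, then badly misses an
# out-of-season outbreak. All numbers are invented.
import numpy as np

weeks = np.arange(104)                                         # two years, weekly
winter = (np.cos(2 * np.pi * weeks / 52) > 0.3).astype(float)  # seasonal proxy

# Year 1: flu rises in winter, so the proxy fits well.
rng = np.random.default_rng(2)
flu_y1 = 100 * winter[:52] + rng.normal(0, 5, 52)
design = np.column_stack([winter[:52], np.ones(52)])
coef, *_ = np.linalg.lstsq(design, flu_y1, rcond=None)         # slope and intercept

# Year 2: a six-week summer outbreak the proxy cannot represent.
flu_y2 = 100 * winter[52:] + np.r_[np.zeros(20), 80 * np.ones(6), np.zeros(26)]
pred_y2 = coef[0] * winter[52:] + coef[1]
print("worst weekly miss in year 2:", np.abs(flu_y2 - pred_y2).max())  # ~80 cases
```

Backtesting against deliberately unusual periods, not just the next ordinary season, is what would have flagged the problem early.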

 
