The FCA wants to enable the safe and responsible use of AI in UK financial markets. It is carrying out research on transparency and the potential role of AI explainability methods.
As part of this, the FCA has published a research note on AI's role in credit decisions. The note focuses on AI explainability in the context of algorithm-assisted decision-making, taking consumer credit decisions as a case study, and examined whether participants could identify errors caused either by incorrect data fed into the algorithm or by flaws in the algorithm's decision logic itself.
The study found that consumers welcomed extra information about the inner workings of the algorithm and reported greater confidence in their ability to disagree with its decisions. However, it also showed that more information may not always be helpful for decision-making and could lead to worse outcomes by adversely affecting consumers' ability to challenge errors.
The findings highlight the value of testing the accompanying materials that may be provided to consumers to explain AI or algorithmic decision-making. They also emphasise the importance of testing consumers' decision-making in the relevant context, rather than relying solely on self-reported attitudes.
In January, the FCA published a research note on bias in natural language processing. It found that, while it is possible to measure some aspects of language bias and mitigation techniques can remove some elements of gender and ethnicity bias, current methods have limitations. Reviewing the research on bias in supervised machine learning, the note also found that:
- The main potential sources of bias are data problems arising from past decision-making, historical practices of exclusion, and sampling issues.
- Biases can also arise due to choices made during the AI modelling process itself, such as what variables are included, what specific statistical model is used, and how humans choose to use and interpret predictive models.
- The literature on technical methods for identifying and mitigating such biases suggests that these methods should be supplemented by careful consideration of context and by human review processes. However, technical mitigation strategies may affect model accuracy and can have unintended consequences for bias against other groups; a simple illustration of this trade-off is sketched below.
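
To make that trade-off concrete, here is a minimal Python sketch of one common fairness metric (the demographic parity difference) and one simple mitigation (group-specific decision thresholds). Everything in it is hypothetical: the data is synthetic, and the metric and mitigation are just one choice among many in the fairness literature, not the approach taken in the FCA's research note.

```python
# Hypothetical sketch: measuring one fairness metric and applying one
# simple mitigation on synthetic data. Not the FCA's methodology.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 5_000

# Synthetic credit scores for two groups. Group A's scores are inflated
# by 0.4, standing in for bias inherited from historical data.
group = rng.choice(["A", "B"], size=n)
ability = rng.normal(0.0, 1.0, size=n)              # true creditworthiness
score = ability + np.where(group == "A", 0.4, 0.0)  # biased model score
repaid = (ability + rng.normal(0.0, 0.5, size=n) > 0).astype(int)  # outcome

def accuracy(approve):
    return float((approve == repaid).mean())

def parity_diff(approve):
    # Demographic parity difference: gap in approval rates (0 = parity).
    return float(approve[group == "A"].mean() - approve[group == "B"].mean())

# One shared threshold: group A is approved more often than group B.
shared = (score > 0.0).astype(int)
print(f"shared threshold   accuracy={accuracy(shared):.3f} "
      f"parity_diff={parity_diff(shared):.3f}")

# Mitigation: a lower threshold for group B equalises approval rates,
# but overall accuracy falls, the trade-off flagged in the FCA's note.
mitigated = np.where(group == "B", score > -0.4, score > 0.0).astype(int)
print(f"group thresholds   accuracy={accuracy(mitigated):.3f} "
      f"parity_diff={parity_diff(mitigated):.3f}")
```

On this synthetic data, the group-specific thresholds close the approval-rate gap but reduce overall accuracy, because more applicants with negative outcomes are approved; a different metric (such as equalised odds) or a different mitigation would shift that balance differently.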
The FCA has said that it welcomes further research exploring approaches to explaining AI-assisted decisions in other financial services contexts, the specific mechanisms by which explainability methods affect consumers, alternative ways of presenting different genres of explanation, and the broader consumer journey beyond recognising errors.
