Financial organizations must improve ethical use of AI as Consumer Duty expectations ramp up in July
Scott Zoldi, Chief Analytics Officer at FICO, finds ample room for improvement at financial services firms amidst their surging use of artificial intelligence
With changes to the UK Consumer Duty regulations coming into force in July, and with improving consumer protection as their central objective, organizations must prepare to use every tool at their disposal to make sure these fresh expectations are met.
AI is increasingly being leveraged at financial organizations across the country. Balancing this growth, AI governance tools are among the most important weapons that financial services and banking firms have in their arsenal to head off unfair customer outcomes. Governance becomes even more important as firms scale their AI initiatives into new parts of the business, setting standards for model development, deployment, and monitoring.
FICO produces an annual report on responsible AI in collaboration with market intelligence firm Corinium, and this year’s edition highlighted some concerning results. The study surveyed 100 C-level AI leaders in banking and financial services on how they are ensuring AI is used ethically, transparently, securely, and in customers’ best interests. Despite a booming appetite for AI, many organizations have yet to put in place a robust system capable of ensuring its ethical use.
AI in Financial Services
As AI technology usage increases across financial services firms, it becomes crucial for business leaders to prioritize responsible and explainable AI solutions that provide tangible benefits to businesses and customers alike. In FICO and Corinium’s annual report, results show 81% of financial firms surveyed in North America have an AI ethics board in place.
This insight suggests that financial services companies are taking responsibility for detecting and correcting bias in their AI algorithms in-house; only 10% currently rely on evaluation or certification from a third party.
Additionally, 82% of financial firms currently evaluate the fairness of decision outcomes to detect bias issues. 40% check for segment bias in model output and 39% have a codified definition for data bias. 67% of firms also have a model validation team charged with ensuring the compliance of new models. And lastly, 45% have introduced data bias detection and mitigation steps.
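To illustrate what a check for segment bias in model output can look like in practice, the sketch below compares approval rates across two customer segments. The column names, data, and the 0.2 disparity tolerance are hypothetical placeholders; real fairness reviews use richer metrics and governance-approved thresholds.

```python
import pandas as pd

# Hypothetical scored decisions: one row per customer, with the segment
# attribute under review and the model-driven approval outcome.
decisions = pd.DataFrame({
    "segment":  ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1, 1, 0, 1, 0, 0, 0, 1],
})

# Approval rate per segment (a simple demographic-parity style check).
rates = decisions.groupby("segment")["approved"].mean()

# Flag the model for review if the gap between the best- and worst-treated
# segments exceeds a tolerance set by the governance team (assumed 0.2 here).
disparity = rates.max() - rates.min()
print(rates)
print(f"Approval-rate disparity: {disparity:.2f}")
if disparity > 0.2:
    print("Segment bias check failed: route model for fairness review.")
```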
Maturing AI Understanding
There is a growing appreciation of responsible AI among banks and other financial services firms, but more needs to be done to ensure the ethical use of AI across the industry.
As AI strategies mature, more companies expand their use of AI beyond centers of excellence. At the same time, partnerships with vendors are making advanced AI capabilities accessible to companies of all sizes.
Corinium’s research also reveals that many financial firms are playing catch-up on responsible AI initiatives: 27% of organizations surveyed in North America have yet to start developing responsible AI capabilities, and only 8% describe their responsible AI strategy as ‘mature’.
The case for further investment in and development of responsible AI initiatives in financial services is clear. Data and AI leaders expect responsible AI to drive better customer experiences, new revenue-generating opportunities and reduced risk. For this to take place, they will need to:
- Create model development standards that can be scaled and integrated with business processes
- Develop the means to monitor and maintain ethical AI model standards over time, for example by using blockchain to create a tamper-evident audit trail (a minimal sketch follows this list)
- Invest in interpretable machine learning architectures that can enhance explainability
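The blockchain suggestion in the second point can be understood as an append-only, hash-chained record of model governance events: any attempt to rewrite history breaks the chain. The following is a minimal, hypothetical sketch of that principle, not any firm’s production system.

```python
import hashlib
import json
from datetime import datetime, timezone

class ModelAuditTrail:
    """Append-only, hash-chained log of model governance events."""

    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": prev_hash,
        }
        # Hash the entry together with its predecessor's hash; editing any
        # earlier entry now invalidates every hash that follows it.
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        prev_hash = "genesis"
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "event", "prev_hash")}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Hypothetical usage: record development milestones, then verify integrity.
trail = ModelAuditTrail()
trail.record({"model": "credit_risk_v3", "action": "bias test passed"})
trail.record({"model": "credit_risk_v3", "action": "approved for production"})
assert trail.verify()
```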
Explainable or Predictive AI?
A cornerstone of AI ethics is the ability to explain a decision made by an AI or machine learning algorithm. After all, how can you know whether a decision is fair if you don’t know the parameters upon which it was made? This raises a tension over what matters most in an AI algorithm: its predictive power, or the extent to which you can explain how it reached its conclusion.
Responsible AI requires the explainability of ‘black box’ AI algorithms: the more of the decision process that can truly be seen, the more trust can be assured. Many organizations, however, struggle to explain machine learning outcomes; despite the droves of explainable AI methods available, many are simply insufficient to earn customer trust. Organizations are therefore dropping poorly explained legacy methods in favour of different model architectures, such as interpretable models that make transparent the relationships the model has learned and the factors that drive different customer outcomes, so those relationships can be explained to clients, tested for bias, or removed from the model entirely. Interpretable machine learning puts the financial institution in the driver’s seat, deciding what the model is allowed to learn and leverage, versus black-box machine learning, where any explanation can only be inferred, and often inferred incorrectly.
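As a hedged illustration of what ‘interpretable by design’ can mean, the sketch below fits a logistic regression over standardized features and reads the learned relationships straight off the coefficients. The feature names and data are synthetic stand-ins; real credit models would go through far more rigorous development and validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: three well-understood credit features.
rng = np.random.default_rng(0)
features = ["utilization", "missed_payments", "account_age_years"]
X = rng.normal(size=(500, 3))
# Construct labels from a known relationship so the example is self-contained.
y = (0.9 * X[:, 0] + 1.2 * X[:, 1] - 0.7 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

# Standardize so the coefficients are directly comparable in magnitude.
X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

# Every learned relationship is inspectable: the sign and size of each
# coefficient show what drives the outcome, and in which direction.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")
```

Because each relationship is explicit, a reviewer can challenge or remove a driver that produces unfair outcomes, something a black-box model makes far harder.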
Monitoring AI Model Drift
As machine learning models make inferences, recognize patterns, and generate predictions, organizations must ensure each model remains responsible and ethical in the light of changing data. It is not sufficient to build a responsible AI model and let it run; it must be continually monitored to confirm that its outcomes remain responsible and ethical in production. As data environments change, not only can the validity of predictions change over time, but so can the ethical use of the model. If an organization is going to run models, it must govern and monitor them to manage and justify their use in light of Consumer Duty.
In total, more than a third of companies surveyed said the governance processes they have in place to monitor and re-tune models to prevent model drift are either ‘very ineffective’ or ‘somewhat ineffective’. A lack of monitoring to measure the impact of models once deployed was a significant barrier to the adoption of responsible AI for 57% of respondents.
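One widely used way to monitor for drift in financial services is the Population Stability Index (PSI), which compares the distribution of scores seen in production with the development baseline. The sketch below assumes numeric scores and applies the common rules of thumb for alert thresholds; real monitoring would track many variables on a regular schedule.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a development baseline and a production sample.

    Common rules of thumb: PSI < 0.1 is stable, 0.1-0.25 is a moderate
    shift, and > 0.25 is a significant shift worth investigating.
    """
    # Cut bins on the baseline distribution (decile edges by default).
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip both samples into the baseline range so outliers land in end bins.
    exp_counts, _ = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)
    act_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    exp_pct = np.clip(exp_counts / len(expected), 1e-6, None)
    act_pct = np.clip(act_counts / len(actual), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Scores seen at model development time vs. scores seen in production.
baseline = np.random.default_rng(1).normal(600, 50, 10_000)
production = np.random.default_rng(2).normal(585, 60, 10_000)
print(f"PSI = {population_stability_index(baseline, production):.3f}")
```

A PSI above roughly 0.25 would typically trigger investigation and possible re-tuning, exactly the governance step that so many respondents reported lacking.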
There’s no doubt that effective use of responsible AI will help optimize customers’ experiences and outcomes at every step of their banking journeys. The list of real-time, real-world applications of AI grows longer every day; fraud detection and personalization are just two of the many areas the technology has already improved.
While firms are being creative and efficient in their use of AI, responsible AI practices must be established in tandem, both to develop new algorithms and to monitor the algorithms already in place.