April 19, 2023

AI and ML in Finance

Opportunities and caveats

by Rachna Maheshwari, Associate Director - Model Risk Management
CRISIL Global Research and Risk Solutions

AI/ML adoption in banking and finance

 

Rapid advancements in artificial intelligence (AI) and machine learning (ML) algorithms present exciting opportunities for innovative applications across industries. The financial services industry is also leveraging these technologies to enhance customer experience, facilitate quicker service delivery, increase revenue generation potential, and promote prudent financial and risk management practices.

 

Extensive use of allied technologies such as big data, data management, high-performance computing and cloud computing has provided an impetus to the adoption of AI/ML in the industry. In a 2020 survey conducted by the World Economic Forum on a random sample of financial institutions, 77% of respondents believed AI would be of high or very high importance to their organisation in the coming years.

 

A more recent 2022 survey of 500 financial services professionals, conducted by Nvidia, found that about 78% of financial companies use machine learning, deep learning or high-performance computing. Capital markets firms such as hedge funds, asset managers and exchanges were the most likely to use deep learning, at 58%, while about 80% of fintechs reported using machine learning.

 

Not surprisingly, use cases have burgeoned over the years, from customer onboarding and know-your-customer processes to financial crime detection, risk management and consumer analytics across channels.

Illustrations of applications of AI & ML in financial services

Caveats

 

While applying AI/ML in financial institutions offers many advantages, it is essential to be aware of the various caveats, pitfalls and considerations that come with the use of these models and algorithms, including:

 

  • Bias: AI/ML algorithms are models that use historical data to recognise patterns and make predictions. However, if there is any bias, discrimination, or under-representation of demographic groups in the historical data, applying an AI/ML algorithm without any parameter restrictions or tuning may result in biased predictions.

    In addition, algorithms are being trained with data from various sources. This includes data obtained from customers' computers and handheld devices, such as device usage, make, and contact names. Such data is often highly correlated with customer demographics and using it can produce biased and unfair credit decisioning, fraud alerting, and scoring outcomes.

    This concern is particularly relevant for the financial services industry when AI/ML algorithms are applied on client data within a line of business that involves direct client interaction, or on employee/human resources data. By contrast, potential bias in algorithms used for trading, pricing instruments or economic forecasting is less likely to result in a regulatory or legal breach.
  • Explainability and interpretability: It is difficult to explain the business intuition behind the specification, feature selection and structure (estimated weights, activation functions, hyperparameters) of complex AI/ML models such as multi-layered and convolutional neural networks, deep decision trees, and random forests with many trees.

    An alternative approach is to use inherently interpretable models, such as linear models, scorecards and simple discrete-outcome statistical models (e.g., logistic regression). This class also accommodates moderately complex specifications such as generalised additive models, variations of decision trees, and instance-based techniques like k-nearest neighbours.

    Local post-hoc explanations and global post-hoc explanations can be provided for complex AI/ML model applications.

    Local post-hoc explanations include feature importance, counterfactuals, and example/observation-specific predictions. Specific examples are:
    • LIME: Local Interpretable Model-agnostic Explanations, which explain individual predictions.
    • SHAP: Shapley values used to quantify the marginal importance of each feature.
    Global post-hoc explanations, on the other hand, include collections of local explanations and summaries of counterfactuals.

    Effective communication of these explanations to business leadership, regulators and auditors requires AI/ML domain knowledge, a deep intuitive understanding of these models, and the ability to address the specific and varied concerns of various stakeholder groups.
  • Breach of data privacy: The unauthorised use of individual customer/employee data collected from handheld/computing devices, recorded data from image recognition devices, and data from other sources (e.g. web scraping) may result in a breach of data privacy and confidentiality laws across multiple jurisdictions globally.
  • Risk management for AI/ML models: In banks and large financial institutions, all the teams involved in risk modelling must possess adequate experience and knowledge of AI/ML methods before scaling up their application. These teams include model developers/users/owners, model risk management, model governance, and model audit.
  • Model implementation accuracy, input data management and quality: It is crucial to develop a robust data management infrastructure and ensure the accuracy of data and model implementation. That said, this can be challenging when model specifications are complex and large volumes of structured and unstructured data from diverse sources are used for training the models. Adequate implementation checks and execution controls should be in place when using these models.

    Neglecting any of these aspects of model implementation may lead to regulatory violations, reputational risk, and monetary costs such as legal fines. For instance, a software glitch in the implementation of a financial crime detection algorithm at a large European bank led to a failure to flag suspicious and illegal transactions to law enforcement authorities (Financial Times, 22nd May 2019).
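The bias caveat above can be made concrete with a simple screening check. Below is a minimal sketch in Python, using entirely hypothetical approval decisions for two demographic groups, that computes the disparate-impact ratio. A ratio below roughly 0.8 (the so-called "80% rule") is a common heuristic red flag that a model's outcomes differ materially between groups:

```python
# Minimal sketch (hypothetical data): screening a credit-approval
# model's outcomes for disparate impact across two demographic groups.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0/1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.

    A value below ~0.8 is often treated as a red flag that the
    model's outcomes differ materially between the two groups.
    """
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = approve) for two groups
group_a = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
```

A check like this is only a first-pass screen; a production fairness review would also examine error rates, calibration and feature correlations with protected attributes.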
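The SHAP technique mentioned above attributes a prediction to individual features via Shapley values. The sketch below computes exact Shapley values by brute force for a tiny, hypothetical linear credit-score model, with "missing" features replaced by baseline (dataset-mean) values; the model, weights and baselines are all illustrative assumptions, not any particular library's API:

```python
from itertools import combinations
from math import factorial

# Hypothetical linear scoring model: weights and baseline means are
# illustrative, not taken from any real dataset.
WEIGHTS = {"income": 0.5, "debt": -0.3, "tenure": 0.2}
BASELINE = {"income": 60.0, "debt": 20.0, "tenure": 5.0}

def model(x):
    """Hypothetical linear score."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def value(coalition, x):
    """Model output with features outside the coalition set to baseline."""
    mixed = {f: (x[f] if f in coalition else BASELINE[f]) for f in WEIGHTS}
    return model(mixed)

def shapley(x):
    """Exact Shapley value per feature: the feature's marginal
    contribution averaged over all coalitions, with the standard
    Shapley weighting."""
    features = list(WEIGHTS)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(s) | {f}, x) - value(set(s), x))
        phi[f] = total
    return phi

applicant = {"income": 80.0, "debt": 40.0, "tenure": 2.0}
print(shapley(applicant))
# For a linear model this reduces to w_f * (x_f - baseline_f):
# income: 0.5*20 = 10.0, debt: -0.3*20 = -6.0, tenure: 0.2*(-3) = -0.6
```

Brute-force enumeration is only feasible for a handful of features; practical SHAP implementations rely on approximations or model-specific shortcuts, but the attribution idea is the same.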

 

In conclusion, we believe financial institutions have significant opportunities to reap the benefits of AI/ML algorithms. However, ample forethought, prudence and caution should be exercised to ensure these algorithms are used safely and beneficially.

 

References

  • “Regulating AI in Finance: Putting the Human in the Loop”, Ross P Buckley, Dirk A Zetzsche, Douglas W Arner and Brian W Tang, Sydney Law Review, 2021.
  • “Powering the Digital Economy: Opportunities & Risks of AI in Finance”, El Bachir Boukherouaa and Ghiath Shabsigh, IMF paper, October 2021.
  • Article in the Financial Times by Olaf Storbeck, 22nd May 2019.
  • “Forging New Pathways: The next evolution of innovation in Financial Services”, World Economic Forum Report, 2020.
  • “Interpreting ML Models: State-of-the-art, Challenges, Opportunities”, training materials by Hima Lakkaraju, Harvard University.
  • “State of AI in Financial Services 2022 Trends”, Report by Nvidia, 2022.