By Deborah

Published on 17 February 2022 by the Bank of England (BoE) and the UK Financial Conduct Authority (FCA), the Final Report aims to foster a common understanding of the use of artificial intelligence (AI) in UK financial services and to gather stakeholders' views on the safe adoption of these technologies.


The Final Report presents the work of the AI Public-Private Forum (AIPPF), which focused on the barriers to AI adoption as well as the challenges and risks related to these technologies.

Some of the key findings related to data, model risk and governance are summarized below.


Data


Data quality can pose challenges stemming from the variety of data sources and from the way data are structured throughout the AI model lifecycle. Furthermore, the value firms place on data can shape their business model and, in turn, their business strategy. Another finding is that firms need to include AI-specific elements in their data governance, risk and privacy frameworks and to take a collective-responsibility approach.


Some of the best practices proposed regarding data management include:

(i) Considering the adoption and use of AI holistically, harmonizing, where possible, data and model processes

(ii) Having processes in place for tracking and measuring data flows within, as well as into and out of, the organization (see the data-lineage sketch after this list)

(iii) Carrying out regular data audits and assessments of data usage

(iv) Having a clear understanding and documentation of the provenance of data used by AI models

(v) Having a clear understanding of the limitations and challenges of using alternative and/or synthetic data
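To make practices (ii) and (iv) more concrete, the following minimal Python sketch shows one way a firm might record dataset provenance and track data flows to downstream models. The structure and names used (DatasetRecord, log_flow, the example dataset and model names) are illustrative assumptions, not a format prescribed by the Final Report.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative only: field names and structure are assumptions, not taken
# from the AIPPF Final Report.
@dataclass
class DatasetRecord:
    """Minimal provenance record for a dataset used by an AI model."""
    name: str
    source: str                  # e.g. internal system, vendor, synthetic generator
    owner: str                   # accountable team or individual
    acquired_on: date
    transformations: List[str] = field(default_factory=list)  # processing steps applied
    consumers: List[str] = field(default_factory=list)        # models/teams using the data

    def log_flow(self, consumer: str) -> None:
        """Record a data flow out of the dataset to a model or team."""
        if consumer not in self.consumers:
            self.consumers.append(consumer)

# Example usage: document provenance and log a flow, supporting a later data audit.
record = DatasetRecord(
    name="retail_transactions_2023",
    source="core banking system",
    owner="data-engineering",
    acquired_on=date(2023, 1, 31),
)
record.transformations.append("PII tokenisation")
record.log_flow("credit-scoring-model-v2")
print(record)
```

Keeping such records alongside each dataset makes the regular data audits in practice (iii) a matter of querying existing documentation rather than reconstructing lineage after the fact.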


Model Risk


Among other findings, the Final Report notes that most of the risks associated with the use of AI systems also arise with non-AI models. What is new is the scale and speed at which these technologies are used, as well as the complexity and opacity of the underlying models.

The report also stresses the need for greater clarity on the level of explainability or interpretability required for certain AI models, particularly those used in regulated activities.

Some of the best practices proposed relating to model risk include:

(i) Documenting and agreeing on an AI review and sign-off process for all new applications

(ii) Completing an inventory of all AI applications in use and in development (see the inventory sketch after this list)

(iii) Clearly documenting methods and processes for identifying and managing bias in inputs and outputs

(iv) Providing a clear explanation of AI application risks and mitigation

(v) Implementing an appraisal process for explainability approaches
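As an illustration of practices (i) to (iii), the sketch below shows a minimal inventory entry that records ownership, sign-off and documented bias checks for an AI application. The names used (AIApplicationEntry, record_bias_check, the demographic_parity_difference metric and example application) are assumptions chosen for illustration, not terminology from the Final Report.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Illustrative only: this inventory structure is an assumption, not a
# format prescribed by the BoE/FCA or the AIPPF Final Report.
@dataclass
class AIApplicationEntry:
    """One row of a firm-wide AI application inventory."""
    name: str
    status: str                          # e.g. "in development", "in use"
    owner: str
    signed_off_by: Optional[str] = None  # reviewer who approved the application
    bias_metrics: Dict[str, float] = field(default_factory=dict)  # documented bias checks

    def record_bias_check(self, metric: str, value: float) -> None:
        """Document a bias measurement on model inputs or outputs."""
        self.bias_metrics[metric] = value

    def sign_off(self, reviewer: str) -> None:
        """Mark the application as reviewed and approved."""
        self.signed_off_by = reviewer

# Example usage: register an application, document a bias check, then sign off.
inventory: List[AIApplicationEntry] = []
app = AIApplicationEntry(
    name="credit-scoring-model-v2",
    status="in development",
    owner="risk-analytics",
)
app.record_bias_check("demographic_parity_difference", 0.03)
app.sign_off("model-risk-committee")
inventory.append(app)
```

An inventory of this kind also gives the review and sign-off process in practice (i) a single place to record its outcome for every new application.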


Governance


A key characteristic of AI systems is their capacity for autonomous decision-making, which affects the governance framework, particularly with regard to accountability and responsibility. The report also outlines the importance of diversity of skills and perspectives in ensuring an effective governance structure, and notes that transparency and communication are of key importance.

Some of the best practices proposed relating to governance include:

(i) Strengthening the interaction between the data science and risk teams from the early stages of the model development cycle

(ii) Establishing a central committee to oversee firm-wide development and use of AI

(iii) Providing training to build understanding of AI

(iv) Sharing good practices across the organization.


The AIPPF was launched on 12 October 2020 to encourage dialogue between the public and private sectors on the use of AI in financial services.


