Federal regulators’ artificial intelligence initiative is a promising development for the financial sector


The federal financial institution regulators collectively issued a Request for Information and Comment (RFI) on March 31, 2021, to better understand how artificial intelligence (AI) and machine learning (ML) are being used in the financial services sector and to identify areas where further “clarification” (possibly in the form of informal guidance or more formal regulation) might be useful.

Published collectively by the Federal Reserve Board of Governors (Fed), the Consumer Financial Protection Bureau (CFPB), the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency, the RFI invites comment on 17 focused questions on how financial institutions ensure the quality of AI/ML outputs and manage model risk.

Benefits and Risks of AI/ML in the Financial Industry

As highlighted in the RFI, developments in AI/ML present significant opportunities for improving banking operations and the delivery of financial services. Financial institutions are currently using and exploring profitable applications of AI/ML in areas including fraud management, Bank Secrecy Act compliance, improved credit decisioning and underwriting, and more effective customer service experiences.

This includes small and medium-sized community institutions, which actively use and explore AI/ML but which have historically struggled to keep pace with technological and FinTech developments. Thus, an important aspect of any AI/ML initiative will be ensuring that these institutions are not competitively disadvantaged by industry AI/ML advances that they cannot afford or implement due to resource constraints.

The RFI also notes that AI/ML offers significant new benefits and creates opportunities to expand access to underserved and “unbanked” people. But, as the RFI explains, AI/ML also presents a number of potential risks, including unlawful discrimination when models produce biased results, operational vulnerabilities when processes become dependent on technology, and risk-management issues regarding the soundness of models.

Regulatory Interest in AI/ML

The issues raised in the RFI were foreshadowed in a recent speech by Fed Governor Lael Brainard, “Supporting the Responsible Use of AI and Fair Outcomes in Financial Services,” delivered at the Fed’s academic symposium on AI earlier this year. In that speech, Governor Brainard signaled the regulators’ collaboration in exploring AI issues and noted that this RFI was in the works. She also explained that “[r]egulators need to provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolves.”

As the RFI notes, there is already a significant body of laws, regulations, guidance documents, and other agency publications regarding the use of AI/ML, and the new RFI is by no means the first survey or agency inquiry into the subject. For example, the CFPB issued an RFI in 2017 on the use of alternative data and modeling techniques in the credit process, following its 2015 Data Point study on “credit invisibles,” which found that credit invisibility and the problems that accompany it disproportionately affect Black and Latino consumers.

Later in 2017, the CFPB issued its first no-action letter under the Equal Credit Opportunity Act (ECOA) and Regulation B to a provider of direct-to-consumer personal loans that based its underwriting and pricing decisions on AI/ML and alternative data. In 2019, the CFPB shared highlights from its analyses of the lending platform’s use of AI/ML, which found that the tested model in fact increased access to credit compared to traditional models and did not raise fair lending concerns. (The lending platform received an updated no-action letter in November 2020 under both the ECOA and the CFPB’s authority over unfair, deceptive, and abusive acts and practices.)

The CFPB reiterated its commitment to encouraging the use of AI/ML to expand access to credit last summer, posting an “innovation spotlight” on its blog focused on the issue of explainability (more below) when AI/ML-based underwriting and pricing models are used. The blog post concluded, presaging the 2021 RFI, that “stakeholder engagement with the Bureau[] may ultimately be used to support an amendment to a rule or its official interpretation.”

Regulators outside the financial space have also shown increased interest in promoting transparency and explainability when AI/ML is used for decision-making, and in protecting against potentially discriminatory outcomes of AI/ML tools. California’s recent privacy rights law calls on the state’s newly formed privacy regulator to enact rules on “automated decision-making,” including “requiring businesses’ response[s] to access requests to include meaningful information about the logic involved in those decisionmaking processes.”1

Legislation addressing automated decision-making systems has been proposed (but not passed) in several states, including California, Washington, Maryland, and Vermont, though these bills focus primarily on the procurement and use of AI models by government agencies. The Federal Trade Commission has also been active in publishing guidance on the use of AI tools, most recently an April 2020 blog post highlighting the importance of transparency and informed consumer decision-making, and NIST is leading an effort to develop principles around explainable AI.


The RFI asks 17 specific questions, grouped around a few key topics, in particular:

  • Explainability: Usually understood as the process by which the basis for certain AI/ML system outcomes (decisions, recommendations, actions) is described or disclosed. The Agencies are particularly interested in how often financial institutions use post-hoc methods to assess conceptual soundness, and in the limitations associated with those methods. A lack of explainability can prevent a financial institution from understanding the conceptual soundness of AI/ML, which the RFI says can reduce reliability when AI/ML is used in new contexts. In line with the questions posed in the CFPB’s July 2020 blog post, a lack of explainability could also prevent a company from demonstrating compliance with legal and regulatory obligations, such as the anti-discrimination and consumer-protection requirements arising from the ECOA and the Fair Credit Reporting Act (FCRA).
  • Fair lending: The RFI asks for comment on:
    • (1) Techniques financial institutions use to ensure fair lending compliance when AI/ML is involved in credit decisions, even for less transparent AI/ML programs;
    • (2) How AI/ML can perpetuate existing biases that can lead to disparate treatment and discrimination, and how to mitigate these risks; and
    • (3) What approaches are taken to identify the reasons for a credit decision when AI/ML is used (which relates to the explainability question above, as well as ECOA’s “adverse action” requirement, under which creditors must disclose the specific reasons for an adverse action against an applicant).
  • Data quality, processing, and use: The RFI asks how financial institutions address the risks and potential gaps related to the quality and use of the data that shapes, and ultimately helps determine, AI/ML predictions or categorizations. The Agencies specifically request information on the risk-management issues raised by using alternative data rather than traditional data in this context, and on whether and how alternative data may be more effective for specific uses.
  • Dynamic updating: When AI/ML continues to learn, independently, in response to new data. In some cases, dynamic updating can cause AI/ML to evolve in unintended and potentially harmful ways. The RFI requests information on how financial institutions address dynamic updating issues.
  • Overfitting: The Agencies seek information on how financial institutions address the risks of overfitting — instances where an algorithm “learns” from patterns in training data that are idiosyncratic and not representative of the population as a whole. The RFI asks whether overfitting poses any obstacles or challenges that impact the use, development, or management of AI/ML programs.
  • Cybersecurity risks: Because AI/ML is a data-intensive technology, it could be vulnerable to cyber threats. The RFI accordingly seeks information on how financial institutions are handling these issues and to what extent they have encountered cybersecurity problems specific to AI/ML.
  • Community institutions: The RFI asks whether community institutions face particular challenges in developing, adopting, and using AI.
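To make the explainability and adverse-action points above concrete, the sketch below shows one simple post-hoc approach for a transparent model: ranking each feature's contribution to a declined score and surfacing the most negative contributors as candidate "principal reasons." All feature names, weights, and thresholds here are invented for illustration — they are not drawn from the RFI, any regulator's guidance, or any real scoring model.

```python
# Hypothetical sketch only: a toy linear credit-scoring model and a
# post-hoc "reason code" generator. Every weight, feature, and threshold
# below is invented for illustration purposes.

# Assumed model weights: positive contributions raise the score.
WEIGHTS = {
    "years_of_credit_history": 4.0,
    "on_time_payment_rate": 50.0,
    "credit_utilization": -30.0,   # high utilization lowers the score
    "recent_inquiries": -5.0,      # each recent inquiry lowers the score
}
BASELINE = 20.0            # intercept
APPROVAL_THRESHOLD = 70.0  # arbitrary cutoff for this example

def score(applicant):
    """Linear score: baseline plus the weighted sum of feature values."""
    return BASELINE + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def adverse_action_reasons(applicant, top_n=2):
    """Post-hoc explanation: compute each feature's contribution to the
    score and return the most negative ones as candidate reason codes."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    most_negative = sorted(contributions.items(), key=lambda kv: kv[1])
    return [name for name, value in most_negative[:top_n] if value < 0]

applicant = {
    "years_of_credit_history": 2,
    "on_time_payment_rate": 0.9,
    "credit_utilization": 0.85,
    "recent_inquiries": 3,
}

if score(applicant) < APPROVAL_THRESHOLD:
    print("Declined; principal reasons:", adverse_action_reasons(applicant))
```

For a linear model this decomposition is exact; for the complex, less transparent models the RFI is concerned with, per-feature contributions can only be approximated by post-hoc methods, which is precisely the conceptual-soundness question the Agencies raise.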

The RFI also invites comments on the management of third-party AI/ML and includes a general call for any other relevant information. While the RFI notes that the use of AI can heighten privacy concerns, it does not actually ask any questions aimed at eliciting comment on transparency and consumer-choice issues related to a financial institution’s use of AI.

Initial thoughts

Comments, whether responding to the specific questions posed in the RFI or offering a more general reaction, must be submitted by June 1, 2021.

These agencies’ interest in publishing guidance on AI/ML is a positive development. For the industry to move forward with new initiatives supported by AI/ML solutions, it is imperative that the agencies’ regulatory and compliance apparatus develop the resources, understanding, and expertise to supervise, regulate, and oversee the deployment and implementation of emerging AI/ML programs.

This initiative does not necessarily presage new rules in this area, at least in the short term. In its 2020 blog post, the CFPB specifically noted that new regulations could eventually emerge. But for now, the RFI is designed to help educate agencies in what will almost certainly be a long process before any real regulatory change occurs.

The RFI is nevertheless a golden opportunity to help the Agencies think through these complex questions and, in so doing, to guide the development of AI/ML regulation in the coming decade. Given the complexity of the information requested, stakeholders interested in commenting should consider collecting relevant information early, to allow sufficient time to identify issues and priorities before the June 1, 2021 comment deadline.
