Kara Snow: The Regulatory Implications of AI for Lenders


PERSON OF THE WEEK: AI is new for the mortgage industry – and every other industry, for that matter – but while lenders and developers are trying to find new applications for AI, regulators are taking a more jaundiced view of this new technology and trying to rein in AI usage with compliance guardrails.

To get a sense of the current state of play with regard to the regulatory implications of using AI in lending, MortgageOrb interviewed Kara Snow, senior regulatory counsel at Covius Compliance Solutions, who is tracking current regulations and watching for signs of where things could go next.

Q: Based on the current legislation and regulations to date, how are U.S. regulators looking at AI and what kinds of usage are they most concerned about?

Snow: Federal agencies have been looking at how they can leverage their existing authority under consumer protection laws against certain AI uses. For example, the CFPB and other federal regulators have said that the use of AI and/or other “black box” technology is not an excuse for UDAAP violations or lending discrimination. In September 2023, the CFPB published a circular on adverse action notices and credit denials when AI is used.

AI has also come up in regulator/industry discussions over bias in home valuations. A group of regulators, including the CFPB, has already proposed a rule that would require lenders using algorithmic appraisals and automated valuation models (AVMs) to demonstrate that these models are free from bias.

Last fall, the Biden Administration issued an Executive Order on the development and use of AI by the government, which many people expect will shape future regulation of the private sector.

There are a number of state and local initiatives, as well. For example, Colorado issued rules last year to ensure that insurance companies using AI models don’t discriminate based on race, and New York City passed a new law targeting bias in AI and machine learning hiring tools.

Today, U.S. regulation is fragmented and piecemeal, but I think there is a good chance that, moving forward, the EU’s AI Act, which is expected to go into effect later this spring, will provide a framework that U.S. regulators will follow.

Q: You mentioned that a significant piece of legislation is about to come out of Europe. Can you give us an overview and explain why you think this will be relevant in the U.S.?

Snow: The legislation I was referring to is the European AI Act, which the European Parliament just passed and which is expected to become effective later this spring. It is likely to become a global prototype for future AI regulation. That’s what happened six years ago when the European Union (EU) adopted the General Data Protection Regulation (GDPR). Several U.S. states, including California, have since used the EU standards as a model to develop their privacy regulations.

Q: How does the European AI Act regulate AI?

Snow: The new EU regulation puts AI risk into four broad silos, with varying levels of restriction on each.

  • Unacceptable Risk under the new standards would include social scoring by government entities or AI systems designed to manipulate behavior. These would be completely prohibited.
  • High Risk uses, such as HR recruiting, credit scoring and underwriting, are permissible but with strict restrictions.
  • Limited Risk would include chatbots and AI-generated content, which are permissible as long as the consumer knows these technologies are in use.
  • Minimal Risk, which is the lowest level of risk, would include things like spam filters.

Most of the rules are around high-risk AI systems, which cover use cases common in the financial services industry, like underwriting and credit scoring. The rules focus on avoiding bias; ensuring the quality of the data sets used to train the algorithms; being able to trace and explain the reasons behind AI decisions; and assuring a high level of accuracy.

AI users would also be required to demonstrate that they are providing clear and adequate information to consumers and that they are taking prudent steps to ensure privacy and security.

Limited-risk systems include AI chatbots, which many lenders are exploring to provide timely, high-quality customer service. Requirements for limited-risk systems focus primarily on disclosure, ensuring that consumers are aware that they are interacting with AI and have the ability to opt out and communicate with a person. Lenders will want to ensure that they are following EU requirements if their chatbots are accessible within the EU.

Q: Do U.S. financial services companies really need to worry about EU regulations?

Snow: When it takes effect, the European AI Act will apply broadly to any AI system developed or used in the EU. U.S. companies will come under this regulation if they do business in Europe or if they use AI to discriminate against European citizens in the U.S. In the case of mortgage lending, this could involve a U.S. credit denial, based on AI, for an Individual Taxpayer Identification Number (ITIN) borrower; an EU citizen’s interaction with a servicer’s chatbot; or the use of an AI system developed in the EU.

Q: Based on the new European AI Act and the current regulation in the U.S., what are the most sensitive areas of AI usage, in your opinion?

Snow: When you think about where AI could have the most impact for lenders and servicers, underwriting, valuations and new predictive analytics all come to mind. And all of these use cases would most likely fall into the EU’s high-risk category. So pretty much any situation where AI influences decision-making (loan approvals, risk-based pricing, eligibility for loan modifications) would get a fair amount of scrutiny under the new EU rules.

As servicers try to reduce costs, many are increasing their reliance on chatbot technology. While this would fall into a lower-risk category under the EU rules, there are still requirements that servicers would need to take into account. We’ve also seen U.S. regulators express concern about the quality of the responses these bots provide in servicing situations. But compliance for these uses should be easier and less costly to achieve.

Q: Finally, what advice would you give lenders and servicers trying to get ahead of the next round of regulations?

Snow: Take a close look at the final EU regulation, because it is most likely going to provide at least a frame of reference, if not a framework, for U.S. state and local regulations.

Also, keep in mind that although these rules apply to citizens of EU countries, U.S. lenders must follow them as well when they interact with these consumers in the U.S. or online.

As lenders develop a company-wide AI framework, they should be sure they are considering how bias can corrupt the information that they are using to train the AI and machine learning tools.

It’s important to follow AI disclosure rules and ensure tools adequately protect consumers’ private information.

Finally, because AI tools learn and change, this isn’t something that can be done once: the tools should be regularly reassessed to ensure they remain free of bias and secure.
