Federal regulators are keeping a close eye on problems that are arising as banks, mortgage servicers and other financial services providers turn over more of their customer service workload to increasingly sophisticated chatbots.
In a research report published Tuesday, “Chatbots in consumer finance,” the Consumer Financial Protection Bureau highlighted recent developments in the use of chatbots and artificial intelligence to provide customer service — and surfaced some of the complaints it’s received from consumers.
While chatbots have an established track record of helping resolve basic inquiries, “their effectiveness wanes as problems become more complex,” the CFPB report concludes.
When chatbots go awry, consumers may at best be left feeling frustrated that they’ve wasted their time, the report reads. But financial institutions risk eroding trust or even violating the law if consumers who are unable to obtain tailored support from a human get inaccurate or incomplete information from a chatbot, are charged junk fees or have personal information revealed.
“As sectors across the economy continue to integrate ‘artificial intelligence’ solutions into customer service operations, there will likely be a number of strong financial incentives to substitute away from support offered in-person, over the phone, and through live chat,” the report warns. “Deficient chatbots that prevent access to live, human support can lead to law violations, diminished service, and other harms.”
With the nation’s 10 largest banks all relying on chatbots to handle some of their customer service workloads, close to 40 percent of Americans interacted with a bank chatbot last year, the report notes.
While simple, rule-based chatbots have been around for some time, banks are embracing more sophisticated technology employing “large language models” (LLMs), machine learning and artificial intelligence provided by third parties to handle customer service.
Wells Fargo’s chatbot virtual assistant, Fargo, uses Google Cloud and LLMs to process customer input and provide tailored responses. JPMorgan Chase and TD Bank rely on Kasisto Inc. to power their “conversational, financially focused” chatbots, while Citibank’s chatbot is powered by Interactions LLC, a “conversational AI” provider.
A Pollfish survey of 2,000 adults conducted by The Motley Fool Ascent in April found that 54 percent of Americans have used ChatGPT to get recommendations for a financial product, such as a credit card, bank, mortgage lender or personal loan.
The CFPB, which collects complaints from the public on consumer financial products and services, highlighted some issues consumers have had with chatbots, including their:
- Limited ability to solve complex problems
- Difficulties in recognizing and resolving customer disputes
- Tendency to provide unreliable or insufficient information
- Failure to provide meaningful assistance
- Hindering timely access to human assistance
In one complaint highlighted in the report, a consumer who was trying to refinance their mortgage was unable to speak to a live representative at Experian to determine why their credit report had been classified as frozen. When logging in to their Experian account or communicating with the credit bureau’s chatbot, the consumer’s credit report was shown as unlocked and unfrozen. But the consumer complained that their mortgage lender was unable to obtain the report and spent two weeks trying to talk to a real person without any success.
Experian responded to the consumer and the CFPB, resolving the complaint by providing “non-monetary relief,” but declined to provide a public response.
The CFPB report cited research published by contact center platform UJET Inc. in December, which claims that 78 percent of consumers end up turning to human customer support after failing to resolve their issues through automated service channels.
“When consumers need help from their financial institution, the circumstances could be dire and urgent,” the report warns. “If they get stuck in loops of repetitive, unhelpful jargon, unable to trigger the right rules to get the response they need, and they don’t have access to a human customer service representative, their confidence and trust in their financial institution will diminish.”
Staying on the right side of the law
In addition to reputational risk, financial institutions risk running afoul of consumer protection laws if they rely on “deficient chatbots” as their primary mode of interacting with customers.
“Financial institutions run the risk that when chatbots ingest customer communications and provide responses, the information chatbots provide may not be accurate, the technology may fail to recognize that a consumer is invoking their federal rights, or it may fail to protect their privacy and data,” the report warns.
“Providing inaccurate information regarding a consumer financial product or service … could be catastrophic,” the report notes. “It could lead to the assessment of inappropriate fees, which in turn could lead to worse outcomes such as default, resulting in the customer selecting an inferior option or consumer financial product, or other harms.”
Regardless of the technology used, financial institutions have an obligation to keep personally identifiable information safe, the CFPB warned. But chatbots have many potential vulnerabilities.
When Ticketmaster UK partnered with Inbenta Technologies for services that included a “conversational AI” on its payments page, hackers targeted Inbenta’s servers to capture information inputted by users, the report reads. That cyberattack affected 9.4 million users, exposing details on 60,000 individual payment cards.
“The scope of security testing needed for AI systems like chatbots is extensive and requires both rigorous testing and thorough auditing of any third-party service providers involved in operations,” the report warns. “There are simply too many vulnerabilities for these systems to be entrusted with sensitive customer data without appropriate guardrails.”
Will consumers benefit?
Chatbots have the potential to generate $8 billion a year in cost savings in the banking and health-care sectors, or about 70 cents for each customer interaction, the CFPB report notes, citing a 2017 analysis by Juniper Research.
Mortgage loan servicer Mr. Cooper, which spends several hundred million dollars a year on call center operations to collect payments on nearly $1 trillion in loans, is launching a multiyear AI project that it expects will generate $50 million in annual savings at the outset.
But the savings that AI generates for businesses may not be passed on to consumers in the form of better products and services if financial institutions aren’t competing on customer service, the CFPB report notes.
“Given the structure of the markets for many consumer financial products and services, people may have limited bargaining power to push for better service when a provider is selected for them,” the report notes. “For example, there is little to no consumer choice in the case of selecting a mortgage servicer or credit reporting company.”
As AI-powered chatbots become increasingly common not only on bank websites but accessible through mobile applications and social media accounts sponsored by providers of other services, including mortgage lenders and servicers, real estate brokerages and insurance companies, the CFPB says it intends to monitor the long-term issues closely.
“The CFPB is actively monitoring the market, and expects institutions using chatbots to do so in a manner consistent with their customer and legal obligations,” the report concludes.