Artificial intelligence (AI) is reshaping the world at remarkable speed. Our Chief Operating Officer, Marc Harris, looks at how these changes are affecting the complaints, questions and experiences consumers bring to us.
Artificial intelligence (AI) is reshaping the world at remarkable speed – from the way people access services to the way businesses design and deliver them. Financial services are no exception.
The Financial Conduct Authority (FCA) recently launched a review into the long-term impact of AI – the Mills Review – which signals how transformative these technologies have already become in financial services. At the Financial Ombudsman Service, we see these changes not as abstract trends but through their impact on the real complaints, questions and experiences consumers bring to us every day.
As the UK’s dispute resolution service for financial services, our focus is simple – to resolve complaints fairly and efficiently, while ensuring the system works for everyone, including the most vulnerable people. That’s why we’re sharing our early insights into how AI is influencing the consumer complaints landscape and how we’re adapting to meet both the opportunities and challenges ahead.
Consumers’ use of AI is fundamentally changing the complaints we see
Over the past year, we’ve seen a clear rise in consumers using generative AI to help draft complaints or communicate with us.
Used well, AI can be a powerful enabler. It can help people organise their thoughts, improve clarity or overcome language and literacy barriers. For some vulnerable consumers and people with neurodevelopmental conditions, it can make a significant difference, helping them access the service and express complex experiences more confidently and clearly.
When submissions are clear, accurate and well structured, AI can be incredibly beneficial, supporting us to get to the heart of the matter more quickly and helping us deliver fair outcomes efficiently.
However, we’re also seeing the other side of the coin: in some cases, AI use is restricting our ability to progress cases as quickly and informally as consumers and businesses expect. Excessive or uncritical use of generative AI can lead to:
- needlessly long or unfocused submissions
- incoherent narratives that obscure rather than clarify
- 'hallucinations', including fabricated laws, misquoted regulations or invented past decisions
- erroneous service complaints that can clog up the service and ultimately mean slower response times for other users.
In a small sample of cases we recently reviewed, up to a third of responses to our initial assessments appeared to have been generated or heavily assisted by AI. While some of these were helpful, others required significant caseworker time simply to verify the accuracy of their content – something that ultimately delays outcomes for everyone.
We know firms are seeing similar trends. Long, AI-generated complaints containing inaccuracies are beginning to create unnecessary burdens and extend response times. While this is not yet a major operational issue for the sector, it is an emerging risk that warrants coordinated attention.
Professional representatives and their use of AI
We are also seeing evidence of claims management companies and legal professional representatives using AI to generate submissions on behalf of consumers. In the most extreme cases, these submissions have exceeded 200 pages in response to a six-page provisional decision, and have contained numerous errors or misunderstandings.
Where professional representatives are charging consumers significant fees for such content, this is especially concerning. We are currently sharing insights with partner agencies, such as the FCA and the Solicitors Regulation Authority (SRA), to begin to unpick the potential for poor conduct and client detriment in this area.
AI’s growing role in financial firms’ operations
Complaints about firms’ use of AI remain low, but across the industry adoption is growing rapidly. Many firms are starting to use AI to triage cases, route queries or analyse customer sentiment – helping resolve issues faster.
These tools offer potential benefits, but they can also introduce risks if poorly designed or opaque. Rigid or automated systems might unintentionally filter out complex cases or miss signs of vulnerability. As more decisions, or elements of decisions, are shaped by AI, consumers must be able to understand how outcomes were reached and receive clear explanations in plain English.
Consumer groups such as Citizens Advice have also reported a rise in complaints where people have sought financial ‘advice’ from AI. Where that ‘advice’ has come from unregulated individuals, firms or AI platforms, consumers are unlikely to be able to rely on the Financial Ombudsman Service for a resolution should it turn out to be wrong.
Transparency, fairness and the ability to challenge an outcome are foundational principles of an effective financial redress system. They must not be compromised as AI becomes more embedded in firms’ processes.
How the Ombudsman is using AI with human judgement at the core
At the Financial Ombudsman Service, we are adopting AI responsibly. Our goal is to help our people get to the heart of cases faster, so we can resolve matters quickly and to a high standard. It’s worth stressing, however, that we will never use AI to replace the judgement of our experts.
We’re using AI in line with our published principles, ensuring accountability, fairness and human oversight at every step. We’ve strengthened our internal governance, established AI leads across the organisation, and developed guidance for both case handlers and consumers on the appropriate use of AI in complaints.
Crucially, our decisions will remain human. AI will support our investigators, but it won’t replace the expertise and discretion required to decide what the right outcome should be in each case.
A system-wide challenge that requires sector-wide collaboration
AI will continue to evolve, and so will the ways consumers and firms use it. Some of the changes ahead we can already anticipate; others will emerge in ways none of us can yet predict.
What’s clear is that maintaining a fair, accessible and trusted redress system in an AI-driven world requires consistent guidance, shared insight and close collaboration with partners across the sector. We welcome the FCA’s focus on this through the Mills Review and look forward to continuing to contribute our evidence, experience and expertise.
As AI reshapes financial services, our commitment remains the same: to ensure consumers and businesses can access a system that delivers fair outcomes, swiftly and effectively.