The Intermediary – March 2026 - Flipbook - Page 83
TECHNOLOGY
Opinion
AI powers efficiency, but guardrails are essential
Artificial intelligence (AI) has been under the spotlight again, as Barclays said it was accelerating its use of AI after a £1.7bn cost-cutting drive over the past two years. Its annual profits rose by 13% to £9.1bn, with the bank set to make a further £2bn in cuts by 2028.
Elsewhere, we learned that experts think AI software used to promote fake news after major incidents – to make money for social media users – should be included in a forthcoming investigation into false advertising.
The Alan Turing Institute’s
Centre for Emerging Technology
and Security found that fake news
posted online after the Southport murders was partly driven by AI to
make money online. It recommended
that Ofcom examine the issue
during a consultation due to take
place this summer. AI tools that
generate content based on trending
topics, optimised for sensationalism
“could have an outsized impact,”
researchers warned.
These stories get to the heart of the
way the mortgage industry needs to
approach AI. On the one hand, there
is enormous potential – as Barclays
knows. We must embrace AI, if only
because failing to do so will rapidly
lead to obsolescence. On the other
hand, there are clearly risks, threats and dangers.
This is why I champion the idea
of embracing AI – while ensuring
guardrails are in place.
AI has become an important tool in
plenty of industries, and mortgages
is no exception. Lenders and servicers
can use AI to process applications
faster, assess risks more accurately
and improve customer interactions.
In originations, AI can help analyse
borrower data to predict default risks
and suggest suitable products. For
servicing, it automates routine tasks
such as payment reminders and query
handling. This shift promises to make
the mortgage process more efficient
for all parties involved.
That’s not even the best bit. The
real value lies in how AI can drive
better decisions while maintaining
trust. In underwriting, for example,
AI models can review credit histories
and property values to offer precise
lending recommendations. This can
accelerate approval times and reduce
the burden on human underwriters.
At the same time, AI allows staff to
focus on complex cases that require
human judgement, transforming
operations and slashing costs.
Race to embrace
Chatbots powered by AI handle basic
enquiries around the clock, providing
instant responses to questions on rates
or documentation. Voice analytics
tools transcribe calls and flag issues
in real time, helping agents respond
effectively.
But the race to embrace AI cannot
be a free-for-all. This is financial
services, and we must balance
innovation with responsibility. We
must take measures to ensure AI
systems operate safely, fairly and in
line with regulations. When decisions
affect people’s lives, these guardrails
are essential.
As a provider of lending software,
Target Group stresses the importance
of setting clear guidelines during
AI training to keep systems focused
on business needs. You need to
implement strict parameters to ensure
the AI sticks to relevant topics and
delivers value. You need to ensure AI
avoids extraneous output that confuses
borrowers or leads to poor decisions.
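One way to picture such a guardrail is a simple filter that only lets the model answer questions on approved subjects. This is a minimal illustrative sketch, not Target Group's actual implementation: the topic keywords, fallback message and `generate_reply` callback are all hypothetical.

```python
# Minimal sketch of a topic guardrail: the assistant only responds when the
# customer's message matches an approved mortgage-related topic; anything
# else gets a safe fallback instead of an off-topic model reply.
# All names here are illustrative, not a real product API.

ALLOWED_TOPICS = {
    "rates": ["rate", "interest", "apr"],
    "documentation": ["document", "payslip", "proof of income"],
    "payments": ["payment", "direct debit", "arrears"],
}

FALLBACK = ("I can only help with questions about your mortgage. "
            "For anything else, please speak to an adviser.")


def guarded_reply(user_message: str, generate_reply) -> str:
    """Pass the message to the model only if it matches an allowed topic."""
    text = user_message.lower()
    for keywords in ALLOWED_TOPICS.values():
        if any(kw in text for kw in keywords):
            return generate_reply(user_message)
    return FALLBACK
```

In practice a production guardrail would use a trained classifier rather than keyword lists, but the shape is the same: the check sits between the borrower and the model, so extraneous output never reaches the customer.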
UDAY BOLA is head of solution design at Target Group
In mortgage origination, these
safeguards can help mitigate risks
in data handling. Given AI processes
sensitive information, robust data
privacy measures are essential.
Guardrails should include encryption, access controls and regular audits.
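The access-control and audit elements can be sketched together: every read of a borrower record is either permitted and logged, or denied and logged, so the audit trail is complete either way. The role names and record fields below are hypothetical, purely for illustration.

```python
# Illustrative sketch of access control plus an audit trail for sensitive
# borrower data. Roles and fields are hypothetical; a real system would
# persist the log and encrypt records at rest.
from datetime import datetime, timezone

AUDIT_LOG = []  # in a real system: an append-only, tamper-evident store

PERMITTED_ROLES = {"underwriter", "compliance"}


def read_borrower_record(record: dict, user: str, role: str) -> dict:
    """Return the record if the role is permitted; log every attempt."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if role not in PERMITTED_ROLES:
        AUDIT_LOG.append((timestamp, user, role, "DENIED"))
        raise PermissionError(f"role '{role}' may not view borrower records")
    AUDIT_LOG.append((timestamp, user, role, "READ"))
    return record
```

Because denials are logged as well as reads, a regular audit can show both that access was restricted and who attempted what, when.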
Platforms like NICE CXone can
generate guaranteed transcripts of
every customer call, creating a factual
audit trail. This removes ambiguity
and offers proof that interactions
meet standards such as treating
customers fairly.
By documenting actions for
vulnerable borrowers, lenders can
show due diligence in audits, reducing
the risk of penalties.
The Financial Conduct Authority
(FCA) requires firms to demonstrate
that AI does not disadvantage certain
groups. Models trained on historical
data are in danger of reflecting past
inequalities. But algorithms can be
adjusted to improve outcomes for protected groups without sacrificing
accuracy. This is one of the reasons we
advocate for thoughtful deployment
of AI. It’s not a silver bullet. It’s a force
multiplier. But it needs to be used with
care. Starting small with proof-of-concept tests allows firms to identify these sorts of issues early.
When it is used this way, AI is
absolutely a force for good. By way of
example, it can detect vulnerability
via sentiment analysis, spotting
phrases like “I’m struggling”
and suggesting options such as
forbearance.
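At its simplest, this kind of detection can be sketched as phrase matching over a call transcript. A deployed system would use a trained sentiment model rather than a fixed list; the phrases and suggested action below are hypothetical examples only.

```python
# Illustrative sketch: flagging potentially vulnerable customers from a call
# transcript by matching distress phrases. The phrase list and suggested
# action are hypothetical; real systems would use trained sentiment models.

VULNERABILITY_PHRASES = [
    "i'm struggling",
    "can't afford",
    "lost my job",
    "behind on payments",
]


def flag_vulnerability(transcript: str) -> dict:
    """Return a flag, the matched phrases, and a suggested next step."""
    text = transcript.lower()
    matches = [p for p in VULNERABILITY_PHRASES if p in text]
    return {
        "vulnerable": bool(matches),
        "matched_phrases": matches,
        "suggested_action": "offer forbearance options" if matches else None,
    }
```

The output would feed an agent's screen in real time, prompting a forbearance conversation rather than a scripted collections call.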
AI is, therefore, in a position
to transform how lenders handle
vulnerable customers. Real-time
insights and proactive care, enabled by
AI, can turn compliance from a cost
into a competitive advantage. ●