Consumers and the UK financial system are being exposed to “serious harm” by the failure of government and the Bank of England to get a grip on the risks posed by artificial intelligence, an influential parliamentary committee has warned.
In a new report, MPs on the Treasury committee criticise ministers and City regulators, including the Financial Conduct Authority (FCA), for taking a “wait-and-see” approach to AI use across the financial sector.
That is despite looming concerns over how the burgeoning technology could disadvantage already vulnerable consumers, or even trigger a financial crisis, if AI-led firms end up making similar financial decisions in response to economic shocks.
More than 75% of City firms now use AI, with insurers and international banks among the biggest adopters. It is being used to automate administrative tasks and even to help with core operations, including processing insurance claims and assessing customers' creditworthiness.
But the UK has failed to develop any specific laws or regulations to govern firms' use of AI, with the FCA and Bank of England claiming general rules are sufficient to ensure positive outcomes for consumers. That means businesses have to determine for themselves how existing guidelines apply to AI, leaving MPs worried this could put consumers and financial stability at risk.
“It is the responsibility of the Bank of England, the FCA and the government to ensure the safety mechanisms within the system keep pace,” said Meg Hillier, chair of the Treasury committee. “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying.”
The report flagged a lack of transparency around how AI could influence financial decisions, potentially affecting vulnerable consumers’ access to loans or insurance. It said it was also unclear whether data providers, tech developers or financial firms would be held responsible when things went wrong.
MPs said AI also increased the likelihood of fraud, and the dissemination of unregulated and misleading financial advice.
In terms of financial stability, MPs found that rising AI use increased firms’ cybersecurity risks, and left them overly reliant on a small number of US tech companies, such as Google, for essential services. Its uptake could also amplify “herd behaviour”, with businesses making similar financial decisions during economic shocks and “risking a financial crisis”.
The Treasury committee is now urging regulators to take action, including the launch of new stress tests that would assess the City’s readiness for AI-driven market shocks. MPs also want the FCA to publish “practical guidance” by the end of the year, clarifying how consumer protection rules apply to AI use, and who would be held accountable if consumers suffer any harm.
“By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm”, the report said.
The FCA said it had already “undertaken extensive work to ensure firms are able to use AI in a safe and responsible way”, but would review the report’s findings “carefully”.
A spokesperson for the Treasury said: “We’ve been clear that we will strike the right balance between managing the risks posed by AI and unlocking its huge potential.”
They added that this involved working with regulators to “strengthen our approach as the technology evolves”, and appointing new “AI champions” covering financial services “to ensure we seize the opportunities it presents in a safe and responsible way”.
A spokesperson for the Bank of England said it had “already taken active steps to assess AI-related risks and reinforce the resilience of the financial system, including publishing a detailed risk assessment and highlighting the potential implications of a sharp fall in AI-affected asset prices. We will consider the committee’s recommendations carefully and will respond in full in due course.”
