An influential committee of MPs has warned the UK financial system is inadequately prepared for AI-driven market shocks and risks. The Treasury Select Committee highlighted "worrying" evidence that regulators need a more proactive approach to protect against major AI-related incidents.
Dame Meg Hillier, the committee's chairwoman, expressed serious concern. "Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk," she said.
The government responded by appointing two AI champions for financial services, effective January 20, 2026. Harriet Rees, group chief information officer at Starling Bank, and Dr Rohit Dhawan, head of AI and advanced analytics at Lloyds Banking Group, will take on unpaid roles spearheading the safe rollout of AI in the sector. They aim to accelerate adoption at scale, identify areas for innovation, and address barriers to implementation.
The committee's report revealed a critical gap. The UK currently has no AI-specific legislation or financial regulation. The Financial Conduct Authority and Bank of England rely on existing frameworks to supervise firms' AI use. The committee found this approach leaves firms with little practical clarity on how existing rules apply to AI, creating uncertainty and potentially increasing risks.
Regulators respond to concerns
The Bank of England welcomed the report, stating it has already assessed AI-related risks and reinforced financial system resilience. The FCA highlighted its extensive work on safe AI use, including its AI Lab launched in April 2025 and a joint statutory code of practice with the Information Commissioner's Office announced in June 2025.
The Treasury committed to balancing AI risks and potential. Economic Secretary Lucy Rigby will receive reports from the AI champions, who are tasked with ensuring safe and responsible opportunities in the sector.
Specific risks identified
The committee's inquiry exposed significant threats to consumers and financial stability. Over 75 percent of UK financial services firms now use AI, with insurers and international banks showing the biggest take-up. Concerns include lack of transparency in AI-driven credit and insurance decisions, financial exclusion for disadvantaged customers, misinformation from AI search engines, and increased fraud.
AI-driven market trading could amplify "herding behaviour," potentially triggering financial crises.
UK firms rely heavily on a small number of US technology companies for AI and cloud services, creating concentration risk.
The committee recommended designating critical AI and cloud providers under the Critical Third Parties Regime for improved oversight.
The report acknowledged AI's potential: "AI offers important benefits, including faster services for consumers and new cyber defences for financial stability," it stated, but warned that the inquiry "revealed significant risks to consumers and financial stability, which could reverse any potential gains."
The committee urged the FCA to publish practical guidance on AI by the end of the current year, detailing consumer protection rules and accountability for AI-caused harm.
Concerns about liability have created a "chilling effect" on the adoption of advanced AI in the sector.
The committee also recommended specific stress-testing for AI-driven market shocks by both regulators.
Note: This article was created with Artificial Intelligence (AI).