What the Mills Review covers

The Mills Review examines AI's impact across three areas: how firms are using AI in client-facing processes, how accountability is assigned when AI-assisted decisions are made, and how existing regulatory frameworks apply to AI deployment.

The review follows a Treasury Select Committee report in January 2026 that warned the current regulatory approach risks "serious harm to consumers and the financial system" if governance does not keep pace with AI deployment. The FCA's response was to commission a structured, multi-year review rather than immediate rule changes — which means the review is shaping expectations now, before formal regulation arrives.

For firms operating in regulated professional services, this matters even if they are not directly FCA-supervised. The Mills Review is establishing the standard of conduct that regulators across professional services will reference as AI governance expectations develop.

What it means for IFAs and financial advisers

For IFAs operating under Consumer Duty, AI use in client-facing processes creates specific accountability requirements that are already active — not pending future regulation.

Consumer Duty requires firms to demonstrate good outcomes for retail clients. If AI is involved in generating recommendations, filtering information presented to clients, or processing client data in ways that influence advice, the firm must be able to demonstrate that the AI-assisted process produced outcomes that meet the Consumer Duty standard.

The FCA has made clear that existing frameworks — Consumer Duty, SM&CR — apply to AI-assisted decisions. Under SM&CR, the Senior Manager responsible for a business area remains accountable for outcomes from that area, including outcomes that were AI-assisted. A Senior Manager cannot disclaim responsibility on the basis that the decision was made or influenced by an AI system.

In practice, this means IFAs need to be able to answer three questions about any AI tool they use in client-facing work: who approved the tool's use, how its output is reviewed before it reaches the client, and what happens when it produces an unexpected result.

What it means for solicitors and accountants

Solicitors fall under SRA oversight and accountants under ICAEW, ACCA or AAT — not directly under FCA supervision. But the direction of travel is consistent across all regulatory bodies. Regulators are increasing scrutiny of AI use in client-facing and compliance-adjacent work, and the questions being asked are the same regardless of which regulator is asking.

The SRA has published guidance making clear that solicitors remain personally responsible for work product, including work where AI was involved in drafting, research or analysis. A solicitor cannot rely on "the AI drafted it" as a defence if a document is incorrect, misleading or harmful to a client. Professional liability follows the professional, not the tool.

For accountants, the equivalent question arises in tax advice, compliance work and audit support. If AI-assisted analysis contributes to advice that is later found to be incorrect, the accountant's professional liability is unchanged. The accountability framework does not shift because AI was involved in the process.

The practical governance gap

Research from Cambridge University's Centre for Alternative Finance (April 2026, surveying 628 firms) found that 52% of organisations are already piloting or deploying autonomous AI agents while their accountability frameworks remain absent or fragmented, and that 65% of firms do not monitor their AI systems for bias or discrimination despite bias monitoring being a regulatory priority.

These are not firms that have ignored governance. Many have AI policies in draft or AI working groups in progress. The gap is between acknowledging that governance matters and actually having documentation that would hold up if a regulator asked to see it tomorrow.

The firms most exposed are those that have adopted AI tools — particularly productivity tools, drafting assistants, and data analysis platforms — without formally inventorying them, assigning accountability, or documenting how outputs are reviewed. This is a common pattern: tools are adopted by individual partners or team leads, used in client work, and never formally registered as part of the firm's AI posture.

What good governance looks like

A defensible AI governance position for a professional services firm requires four elements: a formal inventory of the AI tools in use; assigned accountability for each tool; documented review of AI output before it reaches a client; and a defined process for handling unexpected or incorrect results.

These four elements are not onerous in themselves. For most professional services firms, the work is primarily documentation and process — making explicit the accountability that should already exist, rather than creating new structures from scratch.
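The inventory-and-review discipline described above can be sketched as structured data. The schema below is an illustrative assumption, not a prescribed format: the record type, field names, and example tools are hypothetical, but the accountability questions it checks for are the ones an adviser should be able to answer about any client-facing AI tool.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in a firm's AI tool inventory (hypothetical schema)."""
    name: str
    used_for: str              # e.g. "drafting client letters"
    approved_by: str = ""      # who approved this tool's use
    review_process: str = ""   # how output is reviewed before reaching clients
    escalation_route: str = "" # what happens on an unexpected result

    def gaps(self) -> list[str]:
        """Return the accountability questions this record cannot yet answer."""
        missing = []
        if not self.approved_by:
            missing.append("who approved this tool's use")
        if not self.review_process:
            missing.append("how output is reviewed before it reaches the client")
        if not self.escalation_route:
            missing.append("what happens when it produces an unexpected result")
        return missing

# Hypothetical inventory: one partially documented tool, one undocumented.
inventory = [
    AIToolRecord("DraftAssist", "drafting client letters",
                 approved_by="Managing Partner",
                 review_process="partner sign-off before sending"),
    AIToolRecord("DataLens", "client data analysis"),
]

for tool in inventory:
    for gap in tool.gaps():
        print(f"{tool.name}: no answer to '{gap}'")
```

Even a register this simple makes the common failure mode visible: tools adopted informally show up as records with empty accountability fields, which is exactly the exposure a regulatory inquiry would surface.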

The risk of delay is not primarily regulatory enforcement in the short term. It is the practical exposure that arises when a client dispute, a partner departure or a regulatory inquiry surfaces AI use that was undocumented, unreviewed and unaccountable.

The timing question

The Mills Review is ongoing. Formal regulation may be two to three years away for many firms. But the standard against which AI governance will eventually be assessed is being established now — through FCA guidance, regulatory speeches, and the early cases where AI governance failures have been cited in complaints and disputes.

Firms that build their governance position now do so at lower cost and lower pressure than firms that wait for formal regulatory requirements. The documentation required is the same; the difference is whether it is built as a deliberate, manageable project or as an emergency response to a regulatory prompt.

PraxiumAI's AI Governance Pack builds the four governance elements described above as a fixed-scope engagement, delivered in three to four weeks, producing documentation suitable for regulatory review. It is designed for professional services firms that are using AI and need to be able to demonstrate they are doing so responsibly — without building an internal AI governance function from scratch.