For UK professional services firms, the question is no longer whether this applies. It is whether they are ready.

What the EU AI Act actually requires

The EU AI Act classifies AI systems by risk level. High-risk systems — those used in areas including credit scoring, employment decisions, access to essential services, and critical infrastructure — face the most stringent requirements. These are not theoretical categories. Resume screeners, client suitability tools, document review systems, and performance monitoring software may all fall into high-risk classifications without organisations realising it.

For high-risk systems, the Act requires five things before deployment and on an ongoing basis:

  1. A complete AI inventory — you cannot classify risk for systems you do not know you are running. Shadow AI (tools used by staff without formal IT or leadership approval) counts.
  2. Risk classification for every AI system mapped against the Act's categories.
  3. Continuous monitoring — not an annual audit, but ongoing oversight that proves governance is actively enforced.
  4. Immutable audit trails — every governance decision traceable, every exception documented.
  5. Human oversight mechanisms — high-risk systems need documented human-in-the-loop processes with clear accountability.
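The inventory and classification steps above can be captured as simple structured records. The field names, risk tiers, and example entry below are a hypothetical sketch for illustration only; the Act's actual risk taxonomy requires legal analysis, and none of these identifiers come from the regulation itself:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Hypothetical risk tiers loosely mirroring the Act's structure.
# Real classification against the Act's categories needs legal review.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    business_use: str        # what decisions the tool makes or influences
    it_approved: bool        # False flags shadow AI
    risk_tier: RiskTier
    human_overseer: str      # role accountable for overriding outputs
    last_reviewed: date

# Illustrative inventory entry (all values invented for the example)
inventory = [
    AISystemRecord("CVScreen", "ExampleVendor", "resume screening",
                   it_approved=False, risk_tier=RiskTier.HIGH,
                   human_overseer="Head of HR",
                   last_reviewed=date(2026, 3, 1)),
]

# Shadow AI and unclassified high-risk systems surface immediately
shadow_ai = [s.name for s in inventory if not s.it_approved]
high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
```

Even a spreadsheet with these columns answers the first two questions a regulator will ask: what is running, and who signed off on it.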

Does this apply to UK firms post-Brexit?

Yes. The EU AI Act applies based on where AI systems have effect, not where organisations are headquartered. A UK firm using AI in any process that affects individuals in the EU — or using AI tools built by EU-based providers — is within scope. Beyond the EU Act itself, the FCA's Mills Review (launched January 2026) is examining AI's impact on retail financial services in the UK and has signalled that existing regulatory frameworks including Consumer Duty and SM&CR apply to AI-assisted decisions.

The direction of travel from regulators on both sides of the Channel is consistent: if your organisation cannot explain what its AI is doing, who authorised it, and how it is monitored, that is a governance gap that is increasingly visible.

What "high-risk" means in practice

High-risk does not mean dangerous in a physical sense. Under the Act, a system is high-risk if it makes or influences decisions that significantly affect people's rights, safety, or livelihoods. Common examples across professional services contexts include:

  - Resume and CV screening tools used in recruitment
  - Client suitability and credit-scoring tools
  - Document review systems that influence decisions about individuals
  - Performance monitoring software that feeds into employment decisions

Many organisations are running these systems without having formally classified them. The Act does not accept "we did not know" as a defence.

The five things needed before August

The August 2026 deadline is not the end of a runway — it is the point at which enforcement becomes active. Organisations that have not yet begun governance work are already behind.

1. A complete AI inventory including shadow AI.
Start with a structured review of every AI tool in use across the organisation — not just the ones IT approved, but the ones staff are using day-to-day. This is consistently the first finding in any governance engagement: organisations have far more AI in use than leadership believes.

2. Risk classification.
Map every tool against the Act's risk categories. Most tools will be minimal or limited risk. The ones that are not need to be identified and addressed before August.

3. Continuous monitoring infrastructure.
Point-in-time assessments will not satisfy regulators. The Act requires ongoing monitoring — which means having a process, not just a report.

4. Audit trails.
Every governance decision needs to be documented and traceable. This means records of who approved what, what oversight processes exist, and how exceptions are handled.
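In practice, "immutable" is often implemented as an append-only, hash-chained log: each entry commits to the hash of the previous one, so a retroactive edit breaks the chain and is detectable on verification. The class and field names below are a minimal hypothetical sketch of that idea, not a compliance-grade system:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log where each entry embeds the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def record(self, actor, decision, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "decision": decision,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        # Hash the entry contents (including the previous hash) so that
        # editing any earlier entry invalidates every later one.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            payload = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Illustrative entries (names and decisions invented for the example)
log = AuditLog()
log.record("Head of Risk", "approved", "CVScreen classified high-risk")
log.record("CTO", "exception", "legacy tool pending review")
```

Whether the log lives in code, a governance platform, or a ledger database, the property that matters is the same: decisions are recorded at the time they are made and cannot be quietly rewritten afterwards.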

5. Human oversight documentation.
For any high-risk system, maintain documented evidence of human-in-the-loop processes: who reviews AI outputs, at what point, and with what authority to override.

Where PraxiumAI can help

PraxiumAI's AI Risk and Readiness Report delivers an AI inventory, risk classification, governance gap analysis, and a board-ready output in two weeks. The AI Governance Pack delivers the accountability map, evidence pack, governance policy, and incident protocol needed for regulatory defensibility in three to four weeks.

Both engagements are fixed-scope with defined outputs and defined end dates. Against an August deadline, an engagement that takes three to four weeks leaves little slack: the window to start is now.