EU AI Act: Five Months to Comply
The deadline is 2 August 2026. The obligations are real. The clock is already running.
If you work in a regulated sector and you have not yet mapped your organisation’s exposure to the EU AI Act, now is the time to start. Not after summer. Now.
Five months sounds like a reasonable amount of time. In practice, it isn’t. Especially if you’re using AI in areas like HR, credit decisions, medical devices, or critical infrastructure. For those use cases, the full set of “high-risk” obligations kicks in from August 2026.
This is not an abstract compliance exercise. It is operational. Legal, compliance, risk, and technology teams all need to be aligned, working from the same understanding of what’s in scope and what needs to change.
What the EU AI Act Actually Requires
The EU AI Act classifies AI systems into four risk tiers, each carrying different obligations.
At the top, unacceptable risk AI – things like social scoring systems, biometric scraping, and manipulative AI – has been banned outright since February 2025. If your organisation was using any of these, that window has already closed.
Below that sits high risk, and this is where most regulated organisations need to focus their attention right now. AI systems that touch HR processes, credit decisions, medical devices, or critical infrastructure fall into this category. From August 2026, deployers of these systems must have human oversight mechanisms in place, maintain six months of activity logs, and conduct fundamental rights impact assessments. None of that is optional, and none of it is trivial.
The limited risk tier – covering chatbots and deepfakes – requires transparency disclosures. Users must know when they are interacting with an AI system.
At the base, minimal risk AI, which covers most general software and consumer tools, carries no specific requirements under the Act.
But the boundary between minimal and limited, or limited and high risk, is not always as clear as it may seem…
You Are Probably Both a Provider and a Deployer
One of the most important distinctions in the Act is the difference between:
- Providers (those that build or place an AI system on the market)
- Deployers (those that use AI in a professional context)
Each role comes with its own set of responsibilities.
Providers must complete conformity assessments, apply CE marking, produce technical documentation, and register in the EU AI database. Deployers must implement human oversight, maintain activity logs, and carry out fundamental rights impact assessments.
In reality, many organisations fall into both categories.
If you have customised an AI tool for internal use, or if you have integrated a third-party AI system into a product you then offer to customers, you may be wearing both hats at once. The Act is explicit that this can apply simultaneously, and the compliance requirements stack rather than cancel out.
If your organisation has not yet clarified which role – or roles – it occupies under the Act, that is the first conversation to have.
The Penalty Structure Is Not Hypothetical
The enforcement provisions of the EU AI Act are among the most significant of any technology regulation to date.
- Up to €35 million or 7% of global annual turnover, whichever is higher.
- Up to €15 million or 3% of global annual turnover for high-risk or transparency failures.
SME caps apply, but the figures remain material even for smaller organisations.
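To make the "whichever is higher" mechanic concrete, here is a minimal sketch of how the maximum exposure scales with turnover. The function name and figures used in the example calls are illustrative assumptions, not part of the Act's text:

```python
def penalty_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum possible fine: the higher of the fixed cap or the
    stated percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Prohibited-practice tier: up to €35m or 7% of turnover, whichever is higher.
# For a firm with €1bn global turnover, 7% (€70m) exceeds the €35m floor.
print(penalty_cap(1_000_000_000, 35_000_000, 0.07))  # 70000000.0
```

The key point the example illustrates: for large organisations the percentage, not the fixed figure, sets the ceiling, which is why turnover-based exposure belongs in the risk register.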
These are not theoretical maximums. Regulators across Europe have demonstrated, through GDPR enforcement, that they are prepared to impose significant penalties where governance failures are clear and documented.
In practice, the biggest risk isn’t necessarily that your system fails – it’s that you can’t demonstrate that you understood the risks and put the right controls in place.
This is About Governance, Not Just Documentation
It is tempting to treat regulatory compliance as a documentation exercise. Produce the impact assessment, file the technical records, update the policy, move on. But the gap between legislating AI and actually governing it is where organisations get caught out.
A good example is the European Parliament – the institution that wrote this law – disabling its own AI tools in February 2026. The issue wasn’t that AI existed; it was that the Parliament couldn’t clearly verify what data was being sent where. So, it switched the tools off.
Real compliance under the EU AI Act means:
- Knowing which AI systems your organisation uses,
- Understanding what data they process,
- Demonstrating how they are being overseen, and
- Being able to produce evidence of all of that when asked.
It means the accountability for AI governance sits within your risk and compliance framework, not just with your IT team.
Four Questions to Ask Before August
The structure of the Act is clear. But structure is not readiness. Here are four practical questions your organisation should be able to answer today.
- Have you classified your AI systems against the Act’s risk tiers? Not just the headline tools, but the embedded features, the third-party integrations, and the AI capabilities in the platforms you already use.
- Do you know whether you are a provider, a deployer, or both? And have you mapped the obligations that flow from each role?
- Are your high-risk systems accompanied by the governance infrastructure the Act requires? Human oversight, activity logging, and fundamental rights impact assessments are not optional additions – they are the baseline.
- Can you produce evidence of your compliance posture if asked? Not a policy document. Evidence. Audit trails, logs, documented assessments, and accountable ownership.
Start Before Summer
The August deadline is not the beginning of the compliance journey – it is a point by which the journey should already be well underway. Classification, role identification, obligation mapping, and documentation all take time. Organisations that begin in July will be building under pressure.
The EU AI Act is, at its core, an attempt to create structured confidence in AI systems across the continent. As we have seen before, legislation sets the expectation. Governance is what meets it.
If your organisation is working through its EU AI Act exposure and needs support designing a proportionate, audit-ready compliance framework, Transvault Intelligent Ally works with regulated organisations to turn that structure into operational reality.
Get in touch to talk through where you are and what needs to happen next.
Not sure where to start with AI governance? Speak to an expert
See Transvault Intelligent Ally In Action