EU AI Act Compliance Consulting
Turn Europe's Landmark AI Regulation Into Your Competitive Advantage
The EU AI Act is now in force. Prohibitions are live. GPAI rules apply from August 2025. High-risk obligations follow in August 2026. T3's EU AI Act compliance consulting helps you move from uncertainty to audit-ready confidence — on time and without overengineering.
Feb 2025
Prohibitions & AI Literacy — Now Live
Aug 2025
GPAI obligations & governance — Imminent
Aug 2026
High-risk AI & enforcement — Countdown
€35M
Maximum fines (or 7% revenue)
Why T3
Deep Regulatory Expertise.
Practical Implementation Focus.
T3's EU AI Act advisory team combines frontline experience with the regulatory bodies shaping EU AI law, hands-on implementation across financial services, healthcare, and technology, and a pragmatic philosophy: compliance should enable innovation, not suffocate it. We anchor our work in the EU AI Act, ISO 42001, and the NIST AI RMF — giving you a coherent, globally portable compliance posture.
The Challenge
What Makes the EU AI Act So Complex?
The EU AI Act is not one deadline — it is a rolling wave of obligations that differ by role, risk tier, and deployment date. Most organisations are unsure where to start, which is why a structured EU AI Act readiness assessment is essential.
Risk classification uncertainty
Determining whether your AI systems are prohibited, high-risk, limited-risk, or minimal-risk — and which obligations attach — requires deep legal and technical analysis.
Overlapping regulatory scope
The AI Act intersects with GDPR, product safety directives, sector regulation, and forthcoming standards — creating a compliance web that siloed teams struggle to untangle.
Extra-territorial reach
Like GDPR, the EU AI Act applies to non-EU providers and deployers whose AI system outputs are used within the Union — catching many global companies off guard.
GPAI & systemic risk rules
Providers of general-purpose AI models face distinct obligations around documentation, copyright, and — for models with systemic risk — adversarial testing and incident reporting.
Conformity assessment burden
High-risk systems require rigorous conformity assessments, quality management systems, and technical documentation before market placement — with limited notified bodies available.
AI literacy mandate
Since February 2025, all operators must ensure staff involved in AI are adequately trained — yet the Act gives little guidance on scope, making practical implementation unclear.
Risk-Based Approach
Four Risk Tiers. Different Obligations.
The EU AI Act classifies AI systems based on the risk they pose. Understanding where your systems sit determines your entire compliance roadmap — and is the first step of any AI Act readiness assessment.
Unacceptable
Banned
Social scoring, manipulative subliminal techniques, real-time biometric ID in public spaces (narrow exceptions), emotion recognition in workplaces
High Risk
Strict Obligations
Healthcare, education, employment, critical infrastructure, law enforcement, migration — conformity assessments, risk management, human oversight required
Limited Risk
Transparency Duties
Chatbots, deepfakes, AI-generated content — users must be informed they are interacting with AI or viewing synthetic content
Minimal Risk
No Obligations
Spam filters, AI-enabled games, inventory management — voluntary codes of conduct encouraged but not mandated
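To make the tier-to-obligation mapping concrete, here is a minimal Python sketch. The tier keys and obligation strings are our own illustrative shorthand, not statutory language, and the lists are deliberately non-exhaustive:

```python
# Illustrative only: a simplified mapping from EU AI Act risk tiers to
# headline obligations. Not legal advice or an exhaustive reading of the Act.
TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited - may not be placed on the EU market"],
    "high": [
        "conformity assessment before market placement",
        "risk management system (Art. 9)",
        "technical documentation (Art. 11)",
        "human oversight (Art. 14)",
    ],
    "limited": ["transparency duties - inform users they face AI or synthetic content"],
    "minimal": ["no mandatory obligations - voluntary codes of conduct encouraged"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    try:
        return TIER_OBLIGATIONS[tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}") from None
```

A lookup like this is only a starting point: real classification of a single system into a tier is the hard part, as the readiness assessment sections below explain.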
Applicability
Who Needs to Act?
The EU AI Act applies across the entire AI value chain — and reaches well beyond EU borders.
Providers
Develop or place AI on market
Deployers
Use AI under their authority
Importers
Bring non-EU AI into the EU
Distributors
Make AI available on the market
Authorised Reps
Act on behalf of non-EU providers
Extra-Territorial Scope
The Act applies to providers and deployers outside the EU if their AI system's output is used within the Union — mirroring the GDPR's global reach.
Implementation Timeline
Key Dates You Cannot Miss
1 August 2024 — Entry into Force
The EU AI Act was published in the Official Journal and entered into force. No obligations apply yet — the clock starts.
2 February 2025 — Prohibitions & AI Literacy Live
Banned AI practices (social scoring, manipulative subliminal techniques, untargeted facial scraping) are now prohibited. Article 4 AI literacy obligations apply to all operators.
2 August 2025 — GPAI & Governance
Obligations for general-purpose AI model providers take effect. Member States must designate national competent authorities and adopt penalty rules. EU-level governance bodies must be operational. Codes of Practice ready.
2 August 2026 — High-Risk & Enforcement Begins
The bulk of the AI Act applies. High-risk AI obligations (Annex III), transparency rules (Art. 50), innovation sandboxes, and enforcement mechanisms all take effect. Penalties regime kicks in.
2 August 2027 — Full Application
Obligations for high-risk AI systems embedded in regulated products (Annex I / Art. 6(1)) and third-party conformity assessments apply. GPAI models placed on the market before August 2025 must comply. The EU AI Act is now fully effective.
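The phased dates above can be expressed as a small lookup, useful in internal planning tools. The dates are those listed in the timeline; the wave labels are our own shorthand:

```python
from datetime import date

# The EU AI Act's phased application dates, as listed in the timeline above.
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibitions & AI literacy"),
    (date(2025, 8, 2), "GPAI obligations & governance"),
    (date(2026, 8, 2), "high-risk obligations & enforcement"),
    (date(2027, 8, 2), "full application (Annex I products)"),
]

def waves_in_force(on: date) -> list[str]:
    """List every obligation wave already applicable on a given date."""
    return [label for start, label in MILESTONES if start <= on]
```

For example, `waves_in_force(date(2025, 9, 1))` returns the first three waves, while the high-risk enforcement wave only appears from 2 August 2026.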
What We Offer
Comprehensive EU AI Act Compliance Services
T3 delivers specialised EU AI Act consulting across the entire compliance lifecycle — from initial risk classification and gap analysis through post-market monitoring and continuous improvement.
Risk Classification & Gap Analysis
Inventory all AI systems, classify by risk tier, map obligations, and produce a prioritised compliance roadmap with clear ownership and deadlines.
Governance & Documentation
Design AI governance frameworks, quality management systems, technical documentation packages, and risk management procedures aligned with ISO 42001 and the AI Act.
Conformity Assessment Support
Prepare for internal or third-party conformity assessments, liaise with notified bodies, evaluate CE marking readiness, and draft EU declarations of conformity.
Data Governance & Privacy
Align AI training data and processing with both GDPR and AI Act requirements — including bias detection, data quality assessment, and privacy-preserving techniques.
Post-Market Monitoring
Develop monitoring plans, incident reporting procedures, performance tracking dashboards, and continuous compliance audit processes for deployed systems.
AI Literacy & Training
Executive briefings, role-specific workshops, compliance champion programmes, and ongoing advisory — fulfilling the Article 4 literacy mandate with substance, not tick-boxes.
Our Approach
Five-Phase Compliance Methodology
A structured, repeatable process that takes you from initial EU AI Act readiness assessment to sustainable, audit-ready compliance.
Discovery & AI System Inventory
Comprehensive audit of all AI systems, business processes, and current compliance posture. We conduct stakeholder interviews, technical reviews, and documentation analysis to map your complete AI landscape.
Risk Classification & Obligation Mapping
Categorise each system by EU AI Act risk tier using our proprietary framework. Deliver a clear compliance matrix mapping specific regulatory requirements to implementation actions, owners, and deadlines.
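The compliance matrix deliverable can be pictured as a simple tabular record. The field names below are an illustrative layout, not a fixed T3 template:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative shape of one row in a compliance matrix: a regulatory
# requirement mapped to a concrete action, an owner, and a deadline.
@dataclass
class ComplianceMatrixRow:
    system: str        # AI system from the inventory
    risk_tier: str     # EU AI Act risk tier
    requirement: str   # e.g. "Technical documentation (Art. 11)"
    action: str        # concrete implementation step
    owner: str         # accountable team or role
    deadline: date     # target completion date

row = ComplianceMatrixRow(
    system="credit-scoring-model",
    risk_tier="high",
    requirement="Technical documentation (Art. 11)",
    action="Draft technical documentation file",
    owner="Model Risk Team",
    deadline=date(2026, 8, 2),
)
```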
Roadmap & Governance Design
Develop a prioritised implementation roadmap with milestones, resource estimates, and timelines. Design governance structures, documentation templates, and workflow processes tailored to your organisation.
Execution, Testing & Documentation
Hands-on implementation alongside your teams: building controls, creating required documentation, conducting bias and explainability tests, and preparing for conformity assessments and regulatory interactions.
Continuous Monitoring & Improvement
Establish sustainable processes for post-market monitoring, incident reporting, regulatory change tracking, and iterative refinement of your AI governance as both your systems and the regulatory landscape evolve.
Key Obligations
What High-Risk Providers Must Deliver
Providers of high-risk AI systems carry the heaviest compliance burden. Here are the core requirements — and how T3's EU AI Act advisory services help you meet each one.
Risk Management System (Art. 9)
Establish a continuous, iterative risk management system throughout the AI lifecycle. Identify and analyse known and foreseeable risks, estimate their likelihood and severity, and adopt suitable mitigation measures.
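Article 9 asks providers to estimate the likelihood and severity of each identified risk. One common (but not mandated) way to operationalise that step is a likelihood-times-severity matrix. The 1-5 scales and the acceptance threshold below are illustrative assumptions, not values prescribed by the Act:

```python
# Illustrative sketch of a likelihood x severity risk score, one common way
# to operationalise the Art. 9 "estimate and evaluate" step. The 1-5 scales
# and the acceptance threshold are assumptions, not prescribed by the Act.
def risk_score(likelihood: int, severity: int) -> int:
    """Combine 1-5 likelihood and 1-5 severity into a 1-25 risk score."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be on a 1-5 scale")
    return likelihood * severity

def needs_mitigation(likelihood: int, severity: int, threshold: int = 8) -> bool:
    """Flag risks above the (hypothetical) acceptance threshold for mitigation."""
    return risk_score(likelihood, severity) > threshold
```

Whatever scoring scheme you adopt, the Act's key demand is that it runs continuously across the lifecycle, not as a one-off exercise.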
Data Governance (Art. 10)
Training, validation, and testing data sets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Providers must examine data for possible biases and implement appropriate measures.
Technical Documentation (Art. 11)
Comprehensive documentation demonstrating compliance must be drawn up before market placement — and kept up to date. This covers design, development, testing, and monitoring specifications.
Human Oversight (Art. 14)
High-risk systems must be designed to enable effective human oversight. Individuals must be able to fully understand the system's capabilities and limitations, monitor its operation, and intervene or stop it.
Accuracy, Robustness & Cybersecurity (Art. 15)
Systems must achieve appropriate levels of accuracy, be resilient to errors and inconsistencies, and be resistant to adversarial attacks — including data poisoning and model manipulation.
Quality Management System (Art. 17)
Providers must implement a QMS covering compliance strategy, design control, data management, risk management, post-market monitoring, incident reporting, and communication with competent authorities.
GPAI Models — All Providers (Art. 53)
→ Technical documentation covering training and testing processes
→ Clear instructions for downstream users
→ Compliance with EU copyright law (text and data mining)
→ Summary of training data content published
Systemic Risk Models (Art. 55)
→ Comprehensive model evaluations including adversarial testing
→ Red teaming and incident tracking
→ Reporting serious incidents to the AI Office
→ Adequate cybersecurity protections in place
Prohibited AI Systems (Chapter II, Art. 5)
Banned outright: subliminal manipulation, exploitation of vulnerabilities, social scoring, biometric categorisation based on sensitive characteristics, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), emotion recognition in workplaces and educational institutions, untargeted facial scraping for databases, and AI-assisted individual crime prediction based solely on profiling.
High-Risk AI — Annex III Use Cases
Biometric identification and categorisation · Critical infrastructure management · Education and vocational training · Employment and worker management · Access to essential services · Law enforcement · Migration and border control · Administration of justice
Governance Structure
The AI Office (European Commission) · AI Board (Member State representatives) · Scientific Panel of independent experts · Advisory Forum (industry, civil society, academia) · National competent authorities and market surveillance authorities
GPAI Obligations
General-Purpose AI & Systemic Risk
The EU AI Act introduces a distinct regime for GPAI models — with additional requirements for those classified as posing systemic risk.
All GPAI Models
→ Technical documentation covering training and testing processes
→ Clear instructions for downstream users
→ Compliance with EU copyright law
→ Summary of training data content published
+ Systemic Risk Models
→ Comprehensive model evaluations
→ Adversarial testing and red teaming
→ Incident tracking and reporting to AI Office
→ Adequate cybersecurity protections
Sector Expertise
Deep Industry Knowledge Across High-Stakes Sectors
EU AI Act compliance looks very different in healthcare than in financial services or manufacturing. Our sector specialists bring domain context that generic legal advice cannot — so obligations translate into practical steps your teams can actually implement.
Healthcare
MDR intersection, clinical risk AI
Financial Services
Credit scoring, AML, fraud detection
Technology
GPAI, SaaS AI, product compliance
Manufacturing
Safety systems, robotics, Annex I
Public Sector
Procurement, law enforcement AI
Retail
Recommender systems, pricing AI
Proven Impact
Client Success Stories
Financial Services
Enhancing AI Literacy at a European Bank
A bank deploying AI-driven solutions needed to align staff competencies with EU AI Act requirements. We assessed literacy levels, defined tailored learning objectives, developed e-learning modules and workshops, and established feedback loops for continuous improvement.
Result: Increased staff confidence, reduced AI-related risks, and stronger competitive advantage through demonstrated compliance leadership.
Technology
Operationalising Responsible AI at a Tech Firm
A large technology company needed to augment its Responsible AI framework to meet evolving regulations while unlocking new revenue opportunities. We assessed governance gaps, enhanced RAI principles, formed an AI Ethics Board, and embedded fairness and impact testing across the product lifecycle.
Result: Streamlined compliance processes with measurable ROI, ongoing user confidence, and market-leading positioning on AI governance.
Key Aspects
The Four Pillars of EU AI Act Compliance
The regulation is built around four structural pillars — each with distinct requirements that shape how providers, deployers, and importers must operate.
Conformity Assessments
High-risk AI systems must undergo conformity assessments before being placed on the EU market. Depending on the risk category and sector, this may involve internal assessment or third-party notified body review.
Providers of stand-alone high-risk AI systems must also register their systems in an EU-wide public database before market placement.
Risk Management Requirements
Providers must implement strong controls across the full AI lifecycle, including:
- Data governance measures
- Technical documentation
- Transparency obligations
- Human oversight mechanisms
- Accuracy, robustness, and cybersecurity safeguards
Bans and Restrictions
The EU AI Act introduces strict prohibitions on AI systems considered unacceptable due to the risks they pose to fundamental rights and safety. These are banned outright from the EU market:
- Subliminal manipulation and exploitation of vulnerabilities
- Social scoring
- Untargeted scraping of facial images to build biometric databases
- Emotion recognition in workplaces and educational institutions
- Biometric categorisation using sensitive characteristics
- Predictive policing based solely on profiling
- Real-time remote biometric identification in public spaces (narrow exceptions)
Governance and Fines
The EU AI Act establishes a multi-level governance structure with strong enforcement — mirroring GDPR in scale and ambition:
- The AI Office (European Commission)
- AI Board (Member State representatives)
- Scientific Panel of independent experts
- Advisory Forum (industry, civil society, academia)
- National competent authorities and market surveillance authorities
Fine Tiers
Prohibited AI violations: €35M or 7% global turnover
High-risk non-compliance: €15M or 3% global turnover
Incorrect information: €7.5M or 1.5% global turnover
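The "fixed amount or percentage of global turnover, whichever is higher" rule is straightforward to compute. A minimal sketch using the three ceilings quoted above:

```python
# The EU AI Act's fine ceilings: a fixed amount or a percentage of global
# annual turnover, whichever is higher. Tier keys are our own shorthand.
FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),   # EUR 35M or 7%
    "high_risk": (15_000_000, 0.03),       # EUR 15M or 3%
    "incorrect_info": (7_500_000, 0.015),  # EUR 7.5M or 1.5%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine in EUR for a violation type."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_turnover_eur)
```

For a company with EUR 1bn global turnover, a prohibited-AI violation tops out at EUR 70m (7% exceeds the EUR 35m floor); for smaller firms the fixed amount dominates.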
Why T3
Why T3 for Your EU AI Act Readiness Assessment?
T3 is an award-winning Responsible AI advisory and implementation partner (winner of three AI awards in 2025, including AI Leader of the Year) that translates cutting-edge AI into responsible, scalable business value. We bridge business ambition with engineering excellence.
Shaped Major Global Standards & Policy
Contributed to the EU AI Act, ISO/IEC 42001, NIST AI RMF, OECD AI Principles, and the G7 AI Code of Conduct — we helped write the rules you need to follow.
Advised 2/3 of the World's Leading Big Tech Organisations
Embedded experience at the world's most scrutinised AI organisations gives us unmatched practical insight into what compliance actually requires at scale.
Trained 50+ Board Members, Advised 20+ Governments
From C-suite AI literacy programmes to national policy frameworks, our advisory spans the full governance spectrum — from boardroom to regulation.
Led by Senior AI Operators
Founded by the creator of Google's Responsible Innovation & Ethical ML teams and Oracle's former Chief Data Scientist — practitioners who built AI at global scale.
High-Risk AI Systems
Chapter III: Classification & Obligations
High-risk AI systems face the most demanding compliance requirements under the EU AI Act. Classification is determined by Article 6 and the system's intended use case.
Classification (Art. 6)
A system is high-risk if it falls within:
- Annex I: AI that is a product, or a safety component of a product, covered by EU harmonisation legislation (Art. 6(1))
- Annex III: one of the listed high-risk use cases (Art. 6(2))
Carve-outs apply if the system performs only a narrow procedural task, cannot influence the outcome of decisions, or is designed for preparatory tasks only.
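The Article 6 test described above reduces to a short decision rule. This sketch is a simplification for illustration only; the field names and the single carve-out check are our own assumptions, and real classification requires combined legal and technical analysis:

```python
from dataclasses import dataclass

# Illustrative simplification of the Art. 6 high-risk test. Field names and
# the carve-out check are assumptions, not the full statutory test.
@dataclass
class AISystem:
    annex_i_product: bool         # product / safety component under Annex I legislation
    annex_iii_use_case: bool      # falls under an Annex III use case
    narrow_procedural_task: bool  # carve-out: performs only a narrow procedural task

def is_high_risk(system: AISystem) -> bool:
    """Apply the simplified test: Annex I/III scope minus the carve-outs."""
    if system.annex_i_product:
        return True
    if system.annex_iii_use_case and not system.narrow_procedural_task:
        return True
    return False
```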
Annex III Use Cases
High-risk areas include critical infrastructure · education and training · employment and worker management · essential private and public services · law enforcement · migration, asylum and border control · administration of justice and democratic processes
Provider Obligations (Arts. 8–17)
Providers must ensure: risk management system · data governance · technical documentation · record-keeping · instructions for use · human oversight · accuracy, robustness and cybersecurity · quality management system
EU Database & Governance
EU Database Requirement
Providers of stand-alone high-risk AI systems must register their systems in the EU-wide public database before placing them on the market.
Governance Tools (Chapters V & VI)
Common Questions
EU AI Act: Frequently Asked Questions
Who does the EU AI Act apply to?
The EU AI Act applies to providers, deployers, importers and distributors of AI systems — regardless of where they are based, if their AI system's output is used within the EU. It mirrors GDPR's extra-territorial reach. If you develop, sell, use or procure AI systems affecting EU users, you are likely in scope.
When do the obligations take effect?
Prohibitions on certain AI practices (Article 5) and AI literacy obligations (Article 4) have applied since February 2025. GPAI model obligations apply from August 2025. The bulk of high-risk AI requirements and the enforcement regime take effect from August 2026, with full application — including Annex I products — from August 2027.
What are the penalties for non-compliance?
Fines for deploying prohibited AI systems can reach €35 million or 7% of global annual turnover (whichever is higher). Violations of other high-risk obligations can attract fines of up to €15 million or 3% of global turnover. Providing incorrect information to authorities carries fines of up to €7.5 million or 1.5% of global turnover.
How long does EU AI Act compliance take?
An initial readiness assessment and risk classification typically takes 4–8 weeks. Full compliance implementation for high-risk AI systems — including governance design, documentation, and conformity assessment preparation — generally takes 6–18 months, depending on the number and complexity of systems in scope. We design our programmes to meet your specific deadlines.
What is the difference between a provider and a deployer?
A provider develops an AI system or places it on the market. A deployer uses an AI system under its own authority in a professional context. Providers carry the heaviest obligations (conformity assessments, technical documentation, QMS), while deployers have distinct duties around human oversight, fundamental rights impact assessments, and staff AI literacy.
Get Started Today
High-risk AI obligations arrive August 2026.
Is your business ready?
Book a free 30-minute EU AI Act consultation with our experts. We'll assess your current position, identify priority obligations, and outline a clear path to compliance.
Or email us directly at contact@t-3.ai
The EU AI Act
Background
The EU AI Act is the first comprehensive legal framework on Artificial Intelligence, first proposed by the European Commission in April 2021. It regulates AI systems across the European Union with a horizontal approach, meaning it applies to all sectors and technologies that fall within its scope.
Similar to the GDPR's global impact on privacy law, the EU AI Act is expected to become a worldwide benchmark for AI regulation.
Following negotiations between the European Parliament and the Member States, the final text was adopted in 2024 and entered into force on 1 August 2024, with obligations phasing in through 2027.
Want to hire an EU AI Act expert?
Book a call with our experts.