02B — Services

Technical compliance for AI systems.

EU AI Act classification, gap analysis, technical documentation, evidence, and governance. We turn regulation into a plan your team can maintain, aligned with GDPR, NIS2, and ISO 42001 when relevant.

AI Compliance

We turn regulatory obligations into technical work your team can act on: classification, evidence, documentation, controls, and owners. No endless consulting, just maintainable deliverables.

  1. CMP.01

    EU AI Act classification

    Role, risk category, and obligation mapping for AI systems in use or under development.

  2. CMP.02

    Regulatory gap and compliance plan

    Analysis against EU AI Act, GDPR, and NIS2 with realistic milestones, owners, and priorities.

  3. CMP.03

    Technical documentation and evidence

    System files, risk records, controls, traceability, and audit-ready material.

  4. CMP.04

    Governance and ISO 42001

    Supervision processes, post-market review, and alignment with AI management systems.

03 — Why now

Five regulatory milestones already shaping the roadmap.

AI Act, prohibited practices in force, general-purpose models, liability case law and certifiable standards.

  • 01 · General regime · 2026.08

    EU AI Act: full obligations apply on 2 August 2026.

    Fines of up to €35M or 7% of global turnover. Annex III obligations, Art. 50 transparency duties and the remaining articles stop being optional. Classifying, documenting and registering high-risk systems is a multidisciplinary project, not a checklist.

    EUR-Lex · Regulation 2024/1689
  • 02 · Prohibited practices · 2025.02

    Article 5 prohibitions are already enforceable.

    In force since 2 February 2025. Banned practices include subliminal manipulation, social scoring, untargeted facial scraping, emotion recognition at work or in schools, and sensitive biometric categorisation. These sit in the highest fine bracket of the Regulation.

    EUR-Lex · Art. 5
  • 03 · General-purpose models · 2025.08

    Obligations for general-purpose AI models are already in force.

    Arts. 51-56, applicable since 2 August 2025. They require technical documentation (Annex XI), information for downstream providers (Annex XII), a copyright policy and a public training-data summary. Models trained with more than 10²⁵ FLOPs are presumed to pose systemic risk, with mandatory adversarial testing (red-teaming).

    EUR-Lex · Chapter V
  • 04 · Liability · 2024.02

    Courts: the company is liable for what its chatbot says.

    Moffatt v. Air Canada, a widely cited 2024 decision. The tribunal rejected Air Canada's argument that its chatbot was a separate legal entity responsible for its own actions. Without audited guardrails and a decision log, every model output puts capital and reputation on the line.

    Civil Resolution Tribunal
  • 05 · Standards · 2024

    ISO/IEC 42001 is now the reference AI management system standard.

    First international certifiable standard for AI governance. It maps to the risk-management duties of AI Act Art. 9 and to NIS2 obligations for essential entities. Adopting it makes diligence demonstrable to regulators, auditors and B2B buyers.

    ISO/IEC 42001:2023

04 — Use cases

What we do.

Examples of the kinds of projects we take on across our two service areas. If your situation looks familiar, we can probably help.

AI Compliance · Regulation

EU AI Act classification for production systems

Problem
Companies already using AI but unclear on their role, risk category, and obligations before August 2026.
How we do it
System classification, obligation map, evidence criteria, and compliance plan with owners.

Client types

Scaleups, industrial SMBs, SaaS companies, teams using AI in support, HR, or operations.

Similar challenge? Let's talk →

AI Compliance · Documentation

Technical documentation and evidence pack

Problem
Teams that need to show how their AI system works, which risks it controls, and what traceability it keeps.
How we do it
System files, risk register, controls, required logs, responsibility matrix, and audit-ready documentation.

Client types

Regulated businesses, technology vendors, agencies delivering AI solutions to clients.

Similar challenge? Let's talk →

AI Compliance · Governance

GDPR, NIS2 and ISO 42001 alignment

Problem
AI systems processing personal data, operating in critical contexts, or needing to fit a formal management system.
How we do it
Privacy review, operational security, human oversight, internal responsibilities, and post-market review process.

Client types

Healthtech, fintech, legaltech, mid-size companies, and teams preparing audits.

Similar challenge? Let's talk →

FAQ

What does the EU AI Act require from businesses?

The EU AI Act imposes differentiated obligations depending on the company's role (provider, deployer, importer, or distributor) and the risk category of the system. High-risk systems — in areas such as HR, credit, healthcare, or critical infrastructure — require conformity assessment, technical documentation, registration in the EU database, human oversight, and risk management. Limited-risk systems have lighter transparency obligations. Prohibited applications, such as generalised social scoring or subliminal manipulation, are directly banned.

When does the EU AI Act come into full effect?

The EU AI Act entered into force on 1 August 2024 with a phased application schedule. Prohibited practices apply from 2 February 2025. Obligations for high-risk systems apply in full from 2 August 2026. Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher.

What is a high-risk AI system under the EU AI Act?

The EU AI Act classifies as high-risk AI systems used in the eight areas of Annex III: biometrics, critical infrastructure, education, employment and worker management, access to essential services (credit, insurance), law enforcement, migration and border management, and administration of justice and democratic processes. Systems that constitute safety components of products regulated by European sectoral legislation are also high-risk. The classification determines the full set of applicable obligations.

What technical documentation does the EU AI Act require for high-risk systems?

The EU AI Act requires providers of high-risk systems to maintain technical documentation covering: system description and intended purpose, training data and evaluation methodology, performance metrics and accuracy thresholds, human oversight measures, risk analysis and mitigation plans, instructions for use, and a change log. This documentation must be available to supervisory authorities throughout the system's lifecycle.

What fines does the EU AI Act set for non-compliance?

The EU AI Act sets three penalty levels: up to €35 million or 7% of global annual turnover for using prohibited practices; up to €15 million or 3% of global turnover for breaching other obligations under the Regulation; and up to €7.5 million or 1.5% of global turnover for providing incorrect information to authorities. For SMEs and startups, penalties are capped at whichever figure — percentage of turnover or fixed amount — is lower.

In-house training

Want your team to learn how to do this?

We run in-house technical training, not open courses. Each course is designed around the client's case: EU AI Act classification, audit-ready technical documentation and governance for engineering, compliance and legal teams, with the scope the team asks for.

Let's talk about your AI system.

We respond within 24–48 business hours. We'll suggest a first call to understand your case.

[email protected]
Start the conversation