
AI for Compliance
How artificial intelligence supports compliance with EU regulations – and where its limits lie. Practical insights into the EU AI Act, GDPR, NIS2, CRA, Data Act, DORA and other regulations for product managers, CTOs, founders and decentralized Compliance Champions without a legal background.


Article 12 in Practice: How to Build a Logging System for High-Risk AI
In our previous article, we explained what Article 12 requires — the three purposes of logging, the role responsibilities, the biometric exception, and the unresolved GDPR tension. Now the question is: how do you actually build it? Article 12 is a technical construction requirement. That means the answer is not a document — it is an architecture. In this article, we walk through the complete logging system for TalentMatch AI, layer by layer: from infrastructure through event
cici BEL
18 hours ago · 11 min read


Article 11 in Practice: How to Build Technical Documentation for High-Risk AI
In our previous article, we explained what Article 11 requires — the 9 chapters of Annex IV, the SME track, and how the technical documentation connects to almost every other obligation in the AI Act. Now the question is: how do you actually build it? Theory is useful. But when you sit down to create your technical documentation, you need a concrete process — not just a list of requirements. In this article, we walk through the entire documentation process step by step, using
cici BEL
Apr 29 · 12 min read


Article 10 in Practice: How to Build Compliant Data Governance for High-Risk AI
Article 10 tells you what compliant data governance looks like. But knowing the requirements and actually implementing them are two different things. How do you document design decisions in practice? How do you examine datasets for bias? How do you map your data to the stakeholders it affects? This guide answers those questions with a step-by-step methodology you can follow today. We'll use a runn
cici BEL
Apr 23 · 12 min read


Article 9 EU AI Act in Practice: How to Build a Risk Management System for High-Risk AI
You've run the analysis. Your AI system is classified as High-Risk. Now what? If you've followed our KI-Verordnung Framework guide, you know the four steps to clarity: 1. Scope Check — Does the AI Act apply? ✓ 2. Risk Classification — What's the risk level? ✓ 3. Role Determination — Provider, Deployer, or both? ✓ 4. Obligations — What must you do? ← You are here For High-Risk AI systems, one of the most critical obligations is Article 9: the Risk Management System (RMS). In
Joe Simms
Apr 15 · 8 min read


Why You Shouldn't Use ChatGPT, Claude & Co. as Your EU AI Act Compliance Guide
• Modern LLMs (ChatGPT, Claude, Gemini, Grok) will be better than ever by 2026 — but they aren’t designed for compliance
• The problem is no longer obvious errors, but errors that sound convincing
• Without source references, an audit trail and project-specific context, LLM responses are unusable for EU AI Act work
• Alternatives: research official sources yourself, use specialised tools with RAG, or consult experts
Joe Simms
Mar 27 · 8 min read