

Article 11 EU AI Act: Technical Documentation Explained
Your AI system works. It passes internal tests. Your team is confident. But can you prove — on paper, to a regulator — that it meets every requirement the EU AI Act sets out? That is what Article 11 demands. Not a product description. Not a slide deck. Comprehensive technical documentation that demonstrates compliance with every requirement in Chapter 2 of the AI Act — before your system reaches the market. Article 11 turns your compliance work into evidence. And Annex IV…
cici BEL
Apr 29 · 10 min read


Article 9 EU AI Act in Practice: How to Build a Risk Management System for High-Risk AI
You've run the analysis. Your AI system is classified as High-Risk. Now what? If you've followed our KI-Verordnung Framework guide, you know the four steps to clarity: 1. Scope Check — Does the AI Act apply? ✓ 2. Risk Classification — What's the risk level? ✓ 3. Role Determination — Provider, Deployer, or both? ✓ 4. Obligations — What must you do? ← You are here For High-Risk AI systems, one of the most critical obligations is Article 9: the Risk Management System (RMS)…
Joe Simms
Apr 15 · 8 min read


Article 9 EU AI Act: The Risk Management System Explained
180 pages. 113 articles. And if you're building a High-Risk AI system, Article 9 might be the most important one you need to understand. Why? Because Article 9 doesn't ask you to tick a box once. It requires you to build and maintain a Risk Management System (RMS) — a continuous, adaptive process that spans the entire lifecycle of your AI system. If Article 50 is about transparency (telling people they're interacting with AI), Article 9 is about responsibility: systematically…
jimsigne
Apr 7 · 8 min read


Why You Shouldn't Use ChatGPT, Claude & Co. as Your EU AI Act Compliance Guide
• Modern LLMs (ChatGPT, Claude, Gemini, Grok) will be better than ever by 2026 — but they aren't designed for compliance
• The problem is no longer obvious errors, but errors that sound convincing
• Without source references, an audit trail, and project-specific context, LLM responses are unusable for EU AI Act work
• Alternatives: research official sources yourself, use specialised tools with RAG, or consult experts
Joe Simms
Mar 27 · 8 min read