Article 12 EU AI Act: Record-Keeping & Logging Explained
Your AI system makes thousands of decisions per day. It screens candidates, scores credit applications, flags anomalies, or suggests diagnoses. Now imagine a regulator asks you to reconstruct a single decision from six months ago — the inputs, the logic, the output, and who reviewed it. Can you? That is what Article 12 of the EU AI Act demands. And unlike Article 11, which requires documentation on paper, Article 12 is a technical construction requirement. Your system itself…
cici BEL
20 hours ago · 13 min read


Article 9 EU AI Act in Practice: How to Build a Risk Management System for High-Risk AI
You've run the analysis. Your AI system is classified as High-Risk. Now what? If you've followed our KI-Verordnung Framework guide, you know the four steps to clarity: 1. Scope Check — Does the AI Act apply? ✓ 2. Risk Classification — What's the risk level? ✓ 3. Role Determination — Provider, Deployer, or both? ✓ 4. Obligations — What must you do? ← You are here For High-Risk AI systems, one of the most critical obligations is Article 9: the Risk Management System (RMS). In…
Joe Simms
Apr 15 · 8 min read


Why You Shouldn't Use ChatGPT, Claude & Co. as Your EU AI Act Compliance Guide
• Modern LLMs (ChatGPT, Claude, Gemini, Grok) will be better than ever by 2026 — but they aren't designed for compliance
• The problem is no longer obvious errors, but errors that sound convincing
• Without source references, an audit trail, and project-specific context, LLM responses are unusable for EU AI Act compliance work
• Alternatives: research official sources yourself, use specialised tools with RAG, or consult experts
Joe Simms
Mar 27 · 8 min read