Article 9 EU AI Act: The Risk Management System Explained
- jimsigne
- Apr 7
- 8 min read
180 pages. 113 articles. And if you're building a High-Risk AI system, Article 9 might be the most important one you need to understand.
Why? Because Article 9 doesn't ask you to tick a box once. It requires you to build and maintain a Risk Management System (RMS) — a continuous, adaptive process that spans the entire lifecycle of your AI system.
If Article 50 is about transparency (telling people they're interacting with AI), Article 9 is about responsibility: systematically identifying, assessing, and mitigating the risks your AI system poses to people and society.
In this article, we break down Article 9 into actionable components, explain the key concepts, and show you how to approach risk management — even before the harmonised standards are finalised.
What is Article 9?
Article 9 of the EU AI Act establishes the requirements for a Risk Management System (RMS) for High-Risk AI systems. But it's not just a checklist — it's a fundamentally different approach to how we think about AI safety.
The Key Shift: From System Risk to Societal Impact
Here's something critical that many teams miss:
ISO vs. AI Act — Different Definitions of "Risk"
When ISO/IEC standards refer to risk, they typically mean the impact on the AI system itself — things like system performance, accuracy, or reliability.
The AI Act defines risk differently: the negative impact that an AI system could have on individuals and societies — health, safety, and fundamental rights.
This distinction matters. You're not just protecting your system. You're protecting people.
Who Does Article 9 Apply To?
Article 9 applies to providers of High-Risk AI systems. This includes:
AI systems listed in Annex III (biometrics, critical infrastructure, employment, education, essential services, law enforcement, etc.)
AI systems that are safety components of products covered by existing EU product safety legislation
If your system falls under these categories, a Risk Management System isn't optional — it's mandatory.
The 3 Pillars of Article 9 EU AI Act
Article 9 can be understood through three interconnected pillars:

| Pillar | Focus | Key Question |
|---|---|---|
| 1. Lifecycle Process | Continuous, adaptive governance | Is risk management embedded at every stage? |
| 2. Risk Identification | Known and emerging risks | What could go wrong — and what might we not have anticipated? |
| 3. Post-Market Monitoring | Ongoing data collection and mitigation | Are we learning from real-world deployment? |
Let's examine each pillar in detail.
Pillar 1: A Dynamic, Continuous Process
The first requirement of Article 9 is clear: risk management must be dynamic, planned, and continuous throughout the entire lifecycle of your AI system.
This is not a one-time assessment before launch. It's an ongoing commitment.
Adaptive Governance
Adaptive governance means your risk management structures must evolve and adjust in response to:
New information from deployment
Changing circumstances in how the system is used
Emerging risks that weren't anticipated during development
This approach acknowledges a fundamental truth about AI: technological advances and societal impacts are rapid and unpredictable. Your governance must keep pace.
Lifecycle Risk Management
Risk management must be embedded at every stage:
| Stage | Risk Management Activities |
|---|---|
| Design | Identify potential risks, design for safety |
| Development | Test for known risks, document decisions |
| Deployment | Validate in real-world conditions |
| Monitoring | Collect performance data, identify new risks |
| Updates | Re-assess risks after any modification |
| Decommissioning | Ensure safe phase-out |
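One way to make lifecycle coverage concrete is a risk register keyed by stage. Below is a minimal sketch in Python; the stage names mirror the table above, but the field names and structure are our own illustration, not anything prescribed by the Act or by any standard:

```python
from dataclasses import dataclass, field

# Lifecycle stages from the table above
STAGES = ["design", "development", "deployment",
          "monitoring", "updates", "decommissioning"]

@dataclass
class RiskEntry:
    description: str
    stage: str            # lifecycle stage where the risk was identified
    mitigation: str = ""  # planned or implemented measure
    status: str = "open"  # open / mitigated / accepted

@dataclass
class RiskRegister:
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        assert entry.stage in STAGES, f"unknown stage: {entry.stage}"
        self.entries.append(entry)

    def open_risks(self) -> list:
        return [e for e in self.entries if e.status == "open"]

# The predictive-maintenance example below, as a register entry:
register = RiskRegister()
register.add(RiskEntry("Unanticipated equipment-failure mode", "monitoring",
                       mitigation="Model update and re-validation"))
print(len(register.open_risks()))  # → 1
```

The point of the sketch: every stage is a valid place for a risk to be recorded, and nothing identified after deployment is treated differently from a design-time risk.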
Practical Example
Example: Predictive Maintenance AI
A company deploys an AI system for predictive maintenance in manufacturing. After market release, field data reveals a new type of equipment failure that wasn't anticipated during initial testing.
Following Article 9, the company:
1. Documents the newly identified risk
2. Initiates a model update
3. Re-validates the system
4. Updates the technical documentation
This is adaptive governance in action — the system evolves based on real-world learning.
Pillar 2: Risk Identification & Assessment
The second pillar focuses on what risks to look for and how to evaluate them.
Known Risks vs. Emerging Risks
Article 9 requires you to identify two categories of risks:

Known Risks:
Previously identified and documented
Addressed through established mitigation strategies
Continuously monitored for effectiveness
Examples: data privacy violations, algorithmic bias, system failures
Emerging Risks:
Not yet fully understood or previously encountered
May arise from novel technologies, shifting societal expectations, or evolving threat landscapes
Require proactive identification and innovative mitigation strategies
Examples: unexpected model behaviour after updates, new attack vectors, unforeseen use patterns
The Subjectivity Challenge
Here's where it gets nuanced:
Context-Specific Interpretation
The interpretation of risk management must be context-specific, guided by:
• Intended purpose: What is the system designed to do?
• State of the art: What's technically possible today?
• Reasonably foreseeable misuse: How might people use it in ways you didn't intend?
This inherently subjective element places significant responsibility on AI providers to determine appropriate risk mitigation strategies. There's no universal checklist — you must reason through your specific context.
Practical Example
Example: Medical Chatbot with Self-Learning Model
A natural language processing AI system is deployed as a medical chatbot, initially based on a validated static model. After a software update integrating a self-learning component, the system begins providing increasingly personalised — but unverified — medical advice.
This newly introduced behaviour wasn't anticipated in the original design. It's an emerging risk.
Required response:
• Introduce post-deployment verification tests
• Implement technical safeguards (real-time validation filters)
• Add human-in-the-loop controls for medical recommendations
• Document the risk and mitigation in the RMS
Pillar 3: Post-Market Monitoring & Mitigation
The third pillar addresses what happens after deployment.
Article 9 requires providers to systematically:
1. Collect performance data from real-world use
2. Document findings and anomalies
3. Analyse data to identify new risks
4. Mitigate risks through appropriate measures
What Data to Collect
Post-market monitoring should include:
System logs and performance metrics
User feedback and complaints
Environmental factors affecting system behaviour
Incident reports
Accuracy and reliability trends over time
Connection to Articles 9(4) and 9(5)
Once risks are identified through monitoring, risk management measures must be implemented in accordance with Articles 9(4) and 9(5). This creates a feedback loop:
```
Deploy → Monitor → Identify Risks → Mitigate → Document → Continue Monitoring
```
This isn't a linear process — it's a continuous cycle.
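The cycle can be sketched as a single function that takes the monitoring hooks as inputs. Everything here is illustrative scaffolding — the lambdas stand in for real log collection, anomaly analysis, and documentation tooling:

```python
def monitoring_cycle(collect, identify, mitigate, document):
    """One pass of the feedback loop:
    deploy -> monitor -> identify -> mitigate -> document -> continue."""
    data = collect()              # logs, metrics, user feedback, incidents
    new_risks = identify(data)    # analyse for anomalies and new risks
    for risk in new_risks:
        measure = mitigate(risk)  # measures per Articles 9(4) and 9(5)
        document(risk, measure)   # update RMS and technical documentation
    return new_risks

# Illustrative stand-ins for real monitoring infrastructure:
log = []
monitoring_cycle(
    collect=lambda: [{"event": "failure", "type": "new_failure_mode"}],
    identify=lambda data: [d["type"] for d in data if d["event"] == "failure"],
    mitigate=lambda risk: f"model update for {risk}",
    document=lambda risk, measure: log.append((risk, measure)),
)
print(log)  # → [('new_failure_mode', 'model update for new_failure_mode')]
```

In a real deployment this function would run on a schedule, which is exactly what makes the process a cycle rather than a one-off assessment.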
Risk Prioritisation & The Mitigation Hierarchy
Here's a truth the AI Act acknowledges explicitly: no AI system is ever completely risk-free.
Given limited resources and the impossibility of eliminating all risks, how do you decide what to address first?
Prioritise by Severity and Probability
The most severe and probable risks must be addressed first. This is the core principle of risk-based regulation.
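A common way to operationalise this is a severity-times-probability score. The scale and the example risks below are hypothetical — the Act requires prioritisation but does not prescribe a scoring formula:

```python
def risk_priority(severity: int, probability: float) -> float:
    """Severity (1-5 scale, illustrative) times estimated probability."""
    return severity * probability

# Hypothetical risks: (description, severity, probability)
risks = [
    ("biased hiring recommendations", 5, 0.4),
    ("occasional UI latency", 1, 0.9),
    ("unverified medical advice", 5, 0.2),
]

# Address the most severe and probable risks first
ranked = sorted(risks, key=lambda r: risk_priority(r[1], r[2]), reverse=True)
print(ranked[0][0])  # → biased hiring recommendations
```

Note how the ranking differs from sorting by probability alone: a frequent but harmless latency issue scores below a rarer risk to fundamental rights.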
The Mitigation Hierarchy
The AI Act establishes a clear hierarchy for risk mitigation:

| Priority | Approach | Example |
|---|---|---|
| 1 | Eliminate through design | Remove the risky feature entirely |
| 2 | Technical safeguards | Implement validation filters, rate limits |
| 3 | Organisational measures | Human oversight, approval workflows |
| 4 | User instructions | Training, documentation, warnings |
| 5 | Accept residual risk | Document and justify remaining risks |
The key principle: risk elimination through design takes precedence over ex-post measures like user instructions.
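The hierarchy amounts to an ordered preference: take the highest-priority approach that is feasible for a given risk. A hypothetical sketch — the notion of "feasible" here is a stand-in for the engineering and legal judgement the Act actually requires:

```python
# Mitigation approaches in order of precedence:
# design-level elimination first, residual-risk acceptance last.
HIERARCHY = [
    "eliminate_through_design",
    "technical_safeguards",
    "organisational_measures",
    "user_instructions",
    "accept_residual_risk",
]

def select_mitigation(feasible: set) -> str:
    """Return the highest-precedence approach that is feasible.

    Accepting residual risk is always available as a last resort,
    but the acceptance must then be documented and justified.
    """
    for approach in HIERARCHY:
        if approach in feasible or approach == "accept_residual_risk":
            return approach

print(select_mitigation({"user_instructions", "technical_safeguards"}))
# → technical_safeguards
```

Even when user instructions are cheaper, a feasible technical safeguard wins: ex-ante measures precede ex-post ones.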
Residual Risks: What Remains
Residual risks are those that remain after all feasible mitigation measures have been applied. If residual risks remain, the decision to accept them must be:
1. Carefully documented
2. Justified by the state of the art in technology
3. Aligned with societal expectations (values, fundamental rights)
4. Supported by comprehensive documentation including assessment, trade-offs, and validation
Important: The acceptability of a residual risk should not rest solely on the unilateral judgement of the provider. Transparency and justification are required.
Testing Requirements
The final part of Article 9 addresses testing procedures — a crucial element of compliance.
When and How to Test
Testing must occur at multiple stages:
During development (robustness testing, bias detection)
Before deployment (real-world validation)
After deployment (ongoing performance monitoring)
After any significant update (re-validation)
Real-World Testing (Article 60)
For many High-Risk systems, testing in controlled environments isn't sufficient. Article 60 allows for real-world testing under specific conditions, with appropriate safeguards.
Predefined Metrics
Tests must be carried out against predefined metrics that are relevant to your system's intended purpose. Examples include:
False positive/negative rates
Diagnostic coverage rate
Response time
Accuracy across different user groups
Robustness against adversarial inputs
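The first two metrics above are straightforward to compute from labelled validation data. A minimal sketch, assuming binary labels where 1 is a positive finding:

```python
def fp_fn_rates(y_true, y_pred):
    """False positive and false negative rates from binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical validation labels vs. model predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

fpr, fnr = fp_fn_rates(y_true, y_pred)
print(fpr, fnr)  # → 0.25 0.25
```

The key compliance point is not the arithmetic but that these thresholds are *predefined* — fixed before testing, relative to the system's intended purpose, and recorded in the technical documentation.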
Practical Example
Example: AI-Assisted Medical Diagnosis
A company developing an AI system for image-assisted medical diagnosis:
1. Development phase: Conducts robustness testing using synthetic datasets to detect potential biases
2. Pre-deployment: Performs real-world testing in a partner hospital to assess accuracy on actual cases, with systematic human oversight
3. Predefined metrics: False positive rate, diagnostic coverage rate, response time
4. Documentation: Test results are included in the technical documentation required by Article 9, serving as the basis for compliance documentation under Article 60
Connection to Article 17: Quality Management System
The Risk Management System required under Article 9 doesn't exist in isolation.
It is structurally linked to Article 17, which requires providers of High-Risk AI systems to establish a Quality Management System (QMS).
| System | Focus | Relationship |
|---|---|---|
| RMS (Article 9) | Identifying and mitigating risks | Feeds risk data into QMS |
| QMS (Article 17) | Ensuring consistent quality and compliance | Provides framework for RMS implementation |
Think of it this way: the RMS identifies what risks exist and how to address them. The QMS provides the organisational structure to ensure this happens consistently and is properly documented.
The Standards Question: What's Still Coming
Here's something important to understand:
Harmonised Standards Still in Development
The concrete measures and detailed requirements for Article 9 compliance will be established through harmonised standards currently being developed by CEN-CENELEC JTC 21.
What does this mean for you?
• The principles in this article provide orientation, not final specifications
• Standards will provide more specific guidance on testing methods, documentation requirements, and acceptable risk levels
• Early preparation using these principles will put you ahead when standards are finalised
• TrustTroiAI templates are designed to align with anticipated standard requirements and will be updated as standards are published
This doesn't mean you should wait. The core principles of Article 9 are clear:
Continuous, lifecycle-spanning risk management
Identification of known and emerging risks
Post-market monitoring
Risk prioritisation and mitigation hierarchy
Documentation and justification
You can — and should — start building these capabilities now.
How TrustTroiAI Helps You Build Your RMS
Building a Risk Management System from scratch is complex. That's exactly why we built TrustTroiAI.
Our Approach: AI Analyses, Humans Decide, Experts Validate
TrustTroiAI guides you through the Article 9 requirements step by step — with templates, guidance, and expert validation when you need certainty.
Step 1 → Scope Check (Troi):
Describe your AI system in 2-3 sentences. Our system determines whether Article 9 applies to you and identifies your risk category.
Step 2 → Risk Identification:
Based on your system's intended purpose and deployment context, we help you identify known risks and prompt you to consider emerging risks.
Step 3 → RMS Templates:
Get pre-structured templates for your Risk Management System documentation, aligned with anticipated CEN-CENELEC requirements.
Step 4 → Mitigation Planning:
Prioritise identified risks and document your mitigation measures according to the hierarchy established by Article 9.
Finn: Your Risk Management Assistant
Have questions along the way? Finn is always available:
Knowledge Assistant: Ask questions about Article 9, testing requirements, or risk categories — Finn answers clearly and cites the relevant articles.
Situation Assistant: Finn knows your specific project and can answer context-aware questions: "Does this count as an emerging risk for my system?"
Bruno: Expert Validation for High-Stakes Decisions
For High-Risk AI systems, the stakes are high. When you need certainty:
Submit your RMS documentation for expert review
Get written feedback from qualified compliance experts
Have documented validation for audits and regulatory inquiries
→ In our next article, we'll show you exactly how to implement these templates with practical examples and step-by-step guidance.
Conclusion: Risk Management as Competitive Advantage
Article 9 might seem demanding — and it is. But here's the reframe:
A robust Risk Management System isn't just about compliance. It's about:
Building better AI systems that are safer and more reliable
Protecting your users and the people affected by your AI
Building trust with customers, investors, and regulators
Future-proofing your organisation as AI regulation expands globally
The companies that treat risk management as a core capability — not a checkbox — will have a significant competitive advantage.
Start Your Risk Assessment: Find out in 60 seconds whether Article 9 applies to your AI system — and get started with your Risk Management System.
→ trusttroiai.com/scope-check
Discover the TrustTroiAI Universe:
Meet Troi, Finn, Bruno, and the other characters that guide you through AI compliance.
Sources
[1] Regulation (EU) 2024/1689 — Artificial Intelligence Act
Article 9 (Risk Management System)
Article 17 (Quality Management System)
Article 60 (Testing in Real-World Conditions)
[2] European Commission Joint Research Centre (JRC)
"Analysis of the preliminary AI standardisation work plan in support of the AI Act"
JRC132833, ISBN 978-92-68-03924-3
[3] The Academic Guide to AI Act Compliance
hal-05365570v1
[4] CEN-CENELEC JTC 21
Joint Technical Committee on Artificial Intelligence
Developing harmonised standards for AI Act compliance