Article 12 EU AI Act: Record-Keeping & Logging Explained
- cici BEL
Your AI system makes thousands of decisions per day. It screens candidates, scores credit applications, flags anomalies, or suggests diagnoses. Now imagine a regulator asks you to reconstruct a single decision from six months ago — the inputs, the logic, the output, and who reviewed it.
Can you?
That is what Article 12 of the EU AI Act demands. And unlike Article 11, which requires documentation on paper, Article 12 is a technical construction requirement. Your system itself must be built to log events automatically. Not as a reporting layer added after launch — as a core capability in the system architecture from day one. This article breaks down what Article 12 requires, who is responsible for which part of the logging infrastructure, and a tension that the regulation has not yet resolved: how to reconcile comprehensive AI logging with GDPR data minimisation.
What Is Article 12 EU AI Act?
Article 12 of the EU AI Act (Regulation 2024/1689) establishes the obligation for providers of high-risk AI systems to enable automatic recording of events — commonly referred to as logging. Recital 71 makes the intent explicit: traceable information about the development and operation of AI systems is essential for accountability.
The critical distinction from Article 11: logging is a technical capability, not a document. Article 11 requires you to create a technical documentation file. Article 12 requires you to build your system so that it can automatically record events throughout its entire lifecycle. The documentation describes what the system is. The logs prove what the system does.
Article 12 operates on two levels. Paragraphs 1 and 2 establish general logging requirements for all high-risk AI systems — automatic event recording proportionate to the system's risks and use context. Paragraph 3 adds enhanced requirements specifically for biometric remote identification systems (Annex III, No. 1(a)), including the four-eyes principle under Article 14(5). Most teams building high-risk AI systems will deal with the general requirements. The biometric exception applies to a narrower set of systems.
The Three Purposes of Logging
Recital 71 and the JRC analysis identify three distinct purposes that Article 12 logging must serve. Understanding these purposes shapes every decision about what to log, how to store it, and how long to keep it.
Purpose 1: Traceability. The development and operation of the system must be reconstructable. This means logging not only what the system outputs, but the internal processes that led to those outputs. For a machine learning system, this includes which model version was used, which features were weighted, and what confidence level the system assigned. Traceability is retrospective — it answers the question "what happened and why?" after the fact.
Purpose 2: Conformity Verification. Authorities and notified bodies must be able to verify that the system meets the AI Act requirements during its operation — not just at the moment of conformity assessment. Logs serve as the evidence base for ongoing compliance. If your Article 9 risk management system identifies a risk event, the log must show that the event was detected and handled. Conformity verification is continuous — it answers the question "is the system still compliant?" at any point in time.
Purpose 3: Post-Market Monitoring. After market placement, the provider must monitor the system's performance under Article 72. Logs are the primary data source for this monitoring — they feed anomaly detection, drift analysis, and incident investigation. Post-market monitoring is forward-looking — it answers the question "is something going wrong?" before it becomes a serious incident.

Who Does What: Logging Responsibilities by Role
Logging is not a task for a single team. It spans infrastructure, data science, product, and compliance — and each role contributes a distinct layer. Getting this division right early prevents gaps that become visible only during an audit.
DevOps / Infrastructure Engineers
DevOps owns the logging infrastructure. This means selecting and configuring the logging library (Winston, Logstash, Fluentd, or custom solutions), setting up storage with appropriate encryption and access controls, implementing backup strategies, and ensuring that the logging system itself does not degrade the AI system's performance. The log overhead must be acceptable — a system that slows down by 30% because of logging has a different kind of problem.
DevOps also implements the retention policies technically: auto-delete jobs, archiving strategies, and manual hold functions for incident investigations. The infrastructure must support differentiated retention — risk events kept longer than routine operational logs — while remaining auditable.
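As a minimal illustration of this infrastructure layer, a structured JSONL audit logger with time-based rotation can be built from Python's standard library alone. The logger name, field names, and rotation settings here are assumptions for the sketch, not requirements of the regulation:

```python
import json
import logging
from logging.handlers import TimedRotatingFileHandler

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line (JSONL)."""
    def format(self, record):
        entry = {
            "ts": self.formatTime(record, "%Y-%m-%dT%H:%M:%S%z"),
            "level": record.levelname,
            "event": record.getMessage(),
            **getattr(record, "ctx", {}),  # structured context, if provided
        }
        return json.dumps(entry, sort_keys=True)

def build_audit_logger(path="ai_audit.log"):
    """One rotated file per day; backupCount bounds raw on-disk history."""
    logger = logging.getLogger("ai_audit")
    logger.setLevel(logging.INFO)
    handler = TimedRotatingFileHandler(path, when="midnight", backupCount=365)
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    return logger

log = build_audit_logger()
log.info("inference_completed",
         extra={"ctx": {"model_version": "1.4.2", "confidence": 0.91}})
```

One JSON object per line keeps the log machine-parseable for downstream audit tooling, which matters once notified bodies ask for evidence extracts.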
ML Engineers / Data Scientists
ML Engineers define what gets logged at the model level. This is the most technically demanding part of Article 12 compliance. They must define system-specific event schemas: which events are relevant for this particular AI system? What constitutes a risk event versus routine operation?
For neural network systems, this includes making the decision logic loggable — through SHAP values, attention weights, feature importance scores, or other explainability outputs. The JRC analysis explicitly flags this as a challenge: practical implementation guidance for decision traceability in neural networks is not yet standardised. ML Engineers must also define drift detection events — the metrics and thresholds that indicate the model's behaviour is shifting.
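A drift detection event of the kind described above might be sketched as follows. The two metrics (mean shift, standard-deviation ratio) and their thresholds are illustrative placeholders; a real system would derive both from its Article 9 risk assessment:

```python
import statistics

# Illustrative thresholds; actual values come from the risk assessment.
DRIFT_THRESHOLDS = {"mean_shift": 0.15, "std_ratio": 1.5}

def detect_drift(reference, live):
    """Compare a live feature window against the training reference window.
    Returns a loggable drift event dict, or None if within thresholds."""
    ref_mean, ref_std = statistics.mean(reference), statistics.stdev(reference)
    live_mean, live_std = statistics.mean(live), statistics.stdev(live)
    # Shift of the live mean, expressed in reference standard deviations
    mean_shift = abs(live_mean - ref_mean) / (ref_std or 1.0)
    std_ratio = live_std / (ref_std or 1.0)
    violations = {}
    if mean_shift > DRIFT_THRESHOLDS["mean_shift"]:
        violations["mean_shift"] = round(mean_shift, 3)
    if std_ratio > DRIFT_THRESHOLDS["std_ratio"]:
        violations["std_ratio"] = round(std_ratio, 3)
    if violations:
        return {"event_type": "drift_detected", "violations": violations}
    return None
```

The returned dict is what gets written to the audit log; "no event" deliberately produces nothing, keeping routine operation out of the risk-event stream.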
Product Managers / System Owners
Product Managers bridge the gap between regulatory requirements and technical implementation. They align logging events with the Article 9 risk management system: which events are risk-triggering? Which could indicate a substantial modification?
Critically, PMs own the proportionality decision. Recital 71 requires logging to be proportionate to risks and use context. Logging everything is not the answer — it creates storage costs, performance overhead, and GDPR exposure. Logging too little means non-compliance. The PM must define the right balance, informed by the risk assessment and the system's intended purpose.
Compliance Officers / Data Protection Officers
Compliance and DPO own the legal layer. Every log entry that contains or relates to personal data is a processing activity under the GDPR. The Data Protection Officer must ensure that log processing is captured in the Article 30 GDPR processing register, that appropriate legal bases are identified, that data subject rights (including the right to erasure) are addressed, and that a Data Protection Impact Assessment (DPIA) is conducted where logging involves high-risk personal data processing.
The Compliance Officer ensures that retention periods are defined and justified — balancing the AI Act's requirement for lifecycle-long logging against the GDPR's storage limitation principle. This is the role that must navigate the unresolved tension between both regulations.

What Must Be Logged
The content requirements for Article 12 logging derive from Recital 71, the JRC analysis, and the general structure of Article 12. Four categories of events must be captured:
Risk events. Situations that could cause risks or indicate the need for substantial modifications. These connect directly to your Article 9 risk management system — the risks you identified in your RMS are the events your logging system must capture. For a recruitment system such as TalentMatch AI, this includes edge cases where the ranking confidence drops below a defined threshold, inputs that trigger fairness metric violations, and out-of-distribution applications that the model was not trained to handle.
Decision logic. The internal processes that lead to outputs. For traditional ML systems, this means input features, model version, feature weights, and output confidence. For neural networks, the JRC specifically calls for practical implementation guidance including SHAP values or similar explainability outputs. The goal: if an incident occurs, the log must enable reconstruction of why the system produced a specific output.
System changes. Model updates, retraining events, API version changes, configuration modifications — any change that could affect the system's conformity status. Each change event should include the old and new version identifiers, who initiated the change, and the validation metrics before and after.
Monitoring metrics. Performance metrics tracked continuously during operation — accuracy, fairness metrics, latency, throughput. These feed the Article 72 post-market monitoring system. Threshold violations trigger alerts and must be logged as distinct events.
All logging must be automatic and continuous throughout the system's lifecycle. Manual logging is not sufficient — Recital 71 frames this as a technical capability that must be built into the system architecture.
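Taken together, the four categories can be modelled as a single tagged record type. The field names below are illustrative — as the article notes later, event definitions are system-specific, so treat this as a sketch rather than a schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum

class EventCategory(Enum):
    """The four event categories derived from Recital 71 / the JRC analysis."""
    RISK_EVENT = "risk_event"
    DECISION_LOGIC = "decision_logic"
    SYSTEM_CHANGE = "system_change"
    MONITORING_METRIC = "monitoring_metric"

@dataclass
class LogEvent:
    category: EventCategory
    payload: dict           # category-specific content, defined per system
    model_version: str      # which model produced / was affected by the event
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_record(self):
        """Flatten to a plain dict suitable for JSON serialisation."""
        rec = asdict(self)
        rec["category"] = self.category.value
        return rec

# Example: a decision-logic event for one inference
event = LogEvent(
    category=EventCategory.DECISION_LOGIC,
    payload={"confidence": 0.87,
             "top_features": ["experience_years", "skill_match"]},
    model_version="2.3.1",
)
```

A single record type with a category tag keeps one append-only stream while still letting retention and access rules differ per category.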
The Biometric Exception: Enhanced Requirements
Article 12(3) establishes enhanced logging requirements specifically for high-risk AI systems used for biometric remote identification under Annex III, No. 1(a). These are additional requirements on top of the general logging obligations — four mandatory elements per usage instance:
(a) Usage period: Date and time of the start and end of each use of the system. Not just "the system was active on Tuesday" — precise timestamps marking when each identification session began and ended.
(b) Reference database: Identification of the reference database against which input data was checked. The system must log which database was used for comparison — critical for traceability when multiple databases exist.
(c) Match inputs: The input data that resulted in a match. When the system identifies a person, the specific input that triggered the identification must be stored in the log.
(d) Verification persons: Identification of the natural persons involved in verifying the result, as required under Article 14(5). This is the four-eyes principle: no action or decision may be taken based on the identification without separate confirmation by at least two qualified persons. Both persons must be identified in the log.
The four-eyes principle has one exception: for law enforcement, migration, border control, or asylum contexts, national law may classify the requirement as disproportionate. If this exception applies, it must be documented in the log with the legal basis.
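The four mandatory elements of Article 12(3) map naturally onto a structured record. This sketch, including the field names and the validation rule, is one possible encoding, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BiometricUsageRecord:
    """One usage instance per Article 12(3): the four mandatory elements."""
    start: datetime                 # (a) start of each use
    end: datetime                   # (a) end of each use
    reference_database: str         # (b) database checked against
    match_input_ref: str            # (c) pointer to the input that produced a match
    verifiers: list                 # (d) persons confirming the result, Art. 14(5)
    four_eyes_exemption: str = ""   # legal basis, if the national-law exception applies

    def validate(self):
        if self.end < self.start:
            raise ValueError("usage end precedes start")
        # Four-eyes principle: at least two qualified persons,
        # unless national law exempts (the basis must then be logged)
        if len(self.verifiers) < 2 and not self.four_eyes_exemption:
            raise ValueError("four-eyes principle requires at least two verifiers")
        return True
```

Validating at write time means a non-compliant usage instance is rejected before it ever reaches the log, rather than discovered during an audit.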
Does this apply to your system?
Most high-risk AI systems — including recruitment, credit scoring, and medical diagnosis — are not biometric remote identification systems. The enhanced requirements of Article 12(3) apply only to systems classified under Annex III, No. 1(a). All other high-risk systems follow the general logging requirements.
The GDPR Tension: Logging vs. Data Minimisation
This is the elephant in the room. And it is honest to say: it has not been fully resolved.
The Conflict
The AI Act requires comprehensive logging — decision logic, input data characteristics, risk events, and incident reconstruction capabilities — potentially for the entire lifecycle of the system plus 10 years retention under Article 18. The GDPR, meanwhile, requires data minimisation (Article 5(1)(c): data must be adequate, relevant, and limited to what is necessary) and storage limitation (Article 5(1)(e): data must not be kept longer than necessary for the purpose).
These two requirements pull in opposite directions. And the tension is not theoretical. Consider TalentMatch AI: when the system ranks a job candidate, the log captures which features influenced the ranking, what the confidence score was, and potentially which protected characteristics triggered fairness checks. That log entry is personal data under the GDPR — it relates to an identified or identifiable natural person whose job application was processed. The AI Act says: log it, keep it, make it reconstructable. The GDPR says: minimise it, limit storage, and respect the right to erasure.
What Is Not Yet Resolved
Several questions remain open as of April 2026:
Retention periods: The AI Act requires logging throughout the system's lifecycle and Article 18 mandates 10-year document retention. The GDPR requires that personal data is not stored longer than necessary. No harmonised guidance defines what "necessary" means for AI logs specifically. Is 10 years of decision logs for a recruitment system justified? No authority has issued a definitive answer.
Legal basis: Which GDPR legal basis applies to log processing? Article 6(1)(c) (legal obligation — the AI Act requires logging) is the most likely candidate, but it has not been formally confirmed by the EDPB. Article 6(1)(f) (legitimate interest) is an alternative, but requires a balancing test that has not been standardised for AI logging.
Right to erasure: If a data subject requests deletion under Article 17 GDPR, does the AI Act logging obligation override it? Article 17(3)(b) GDPR exempts data from deletion when processing is necessary for compliance with a legal obligation — but the scope of this exemption for AI logs has not been tested in practice or case law.
EDPB guidance: The European Data Protection Board has not published specific guidance on AI Act logging and GDPR interaction. Until it does, providers operate in a grey zone where both regulations apply simultaneously without clear prioritisation rules.
Practical Recommendations — What You Can Do Now
Despite the regulatory ambiguity, there are concrete steps providers can take to navigate the tension responsibly:
Pseudonymise log data. Replace direct identifiers with pseudonyms in log entries. The log still enables incident reconstruction (the pseudonym maps back to the original data through a separate, access-restricted key), but the log itself does not contain directly identifying personal data. This reduces GDPR exposure without compromising AI Act compliance.
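A common technique for this is a keyed hash (HMAC): deterministic, so the same subject always maps to the same token and incident reconstruction still works, but not reversible without the key. The key handling shown here is illustrative only; in production the key lives in a managed, access-restricted store:

```python
import hmac
import hashlib

# Illustrative key. In production this is held in a key management system,
# stored separately from the logs themselves.
SECRET_KEY = b"replace-with-managed-key"

def pseudonymise(identifier: str) -> str:
    """Deterministic pseudonym: same subject -> same token, but the log
    itself contains no directly identifying personal data."""
    return hmac.new(SECRET_KEY, identifier.encode(),
                    hashlib.sha256).hexdigest()[:16]

log_entry = {
    "subject": pseudonymise("applicant-42@example.com"),
    "decision": "shortlisted",
    "confidence": 0.88,
}
```

Rotating or destroying the key also gives you a practical erasure mechanism: without it, the pseudonyms can no longer be linked back to individuals.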
Implement differentiated retention. Not all logs need the same retention period. Risk events and incident-related logs justify longer retention (potentially the full 10 years under Article 18). Routine operational metrics can be retained for shorter periods (6 months to 2 years). Document the justification for each retention tier — proportionality is the key principle under both regulations.
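Differentiated retention can be expressed as a small tier table plus a purge check that respects manual holds. The periods below are examples taken from the ranges mentioned above, not recommendations; each tier's justification belongs in your retention policy and DPIA:

```python
from datetime import datetime, timedelta

# Illustrative tiers; actual periods must be justified per system.
RETENTION_TIERS = {
    "risk_event":        timedelta(days=3650),  # up to 10 years (Art. 18 alignment)
    "decision_logic":    timedelta(days=730),   # 2 years
    "monitoring_metric": timedelta(days=180),   # 6 months
}

def is_expired(event_type: str, logged_at: datetime, now: datetime,
               legal_hold: bool = False) -> bool:
    """True if the entry may be purged: past its tier's retention period
    and not frozen by a manual hold for an ongoing incident investigation."""
    if legal_hold:
        return False
    return now - logged_at > RETENTION_TIERS[event_type]
```

An auto-delete job then iterates over entries and purges those for which `is_expired` returns True, satisfying storage limitation without touching evidence under hold.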
Conduct a DPIA. A Data Protection Impact Assessment under Article 35 GDPR is almost certainly required for logging systems that process personal data in high-risk AI contexts. The DPIA documents the necessity, proportionality, and safeguards of the logging — and serves as evidence that you took both regulations seriously.
Update your processing register. Log processing activities must be captured in your Article 30 GDPR register. Many organisations miss this because they treat logging as a technical function rather than a data processing activity.
Restrict access as a safeguard. The fewer people who can access log data, the easier it is to justify under GDPR. Implement role-based access controls — DevOps for infrastructure, Compliance for audit access, and explicit authorisation for any other access. Log who accesses the logs.
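A minimal sketch of such role-based access with a built-in access trail follows. The role names and permission sets are assumptions; the point is that every access attempt, including denials, is itself recorded:

```python
# Illustrative roles; real systems map these to an identity provider.
ROLE_PERMISSIONS = {
    "devops": {"infrastructure"},
    "compliance": {"audit"},
}

access_audit = []  # in production: an append-only trail, stored separately

def access_logs(user: str, role: str, purpose: str) -> bool:
    """Grant or deny access to log data, and record the attempt either way
    ("log who accesses the logs")."""
    allowed = purpose in ROLE_PERMISSIONS.get(role, set())
    access_audit.append({"user": user, "role": role,
                         "purpose": purpose, "granted": allowed})
    return allowed
```

Recording denials as well as grants gives the DPO evidence that the restriction actually operates, not just that it was configured.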
Honest Assessment
The tension between AI Act logging and GDPR data minimisation will likely be resolved through a combination of EDPB guidance, delegated acts, and eventually case law. Until then, the approach above — pseudonymise, differentiate retention, conduct a DPIA, and restrict access — represents a defensible middle ground. But it is a middle ground, not a definitive solution.
Logging vs. Documentation: The Difference That Matters
Teams often conflate Article 11 (technical documentation) and Article 12 (logging). They are fundamentally different obligations:
Article 11 is a document. It is a static (though updatable) file that describes the system — its purpose, architecture, risk management, performance metrics, and compliance status. It is created before market placement and updated when the system changes. It lives outside the system.
Article 12 is a technical capability. It is a function built into the system architecture that automatically records events during operation. It is not a file you create — it is a feature you build. Recital 71 makes this explicit: the system must technically enable automatic event recording. Retroactively bolting logging onto a system that was not designed for it may not meet this requirement.
The practical consequence: Article 12 compliance must be considered during system design, not during compliance review. By the time your system is built and running, the logging architecture is either there or it is not. Adding it later is expensive, technically risky, and may not satisfy the "by design" intent of the regulation.
Your Article 11 technical documentation should reference the Article 12 logging specification — what events are logged, how they are stored, and how long they are retained. The log spec is part of the technical file.
The Cross-Reference Web
Article 12 connects to six other AI Act articles and intersects directly with the GDPR:
Connection | Article | What It Means for Logging |
Risk Events | Art. 9 | Risks identified in the RMS define which events must be logged |
Technical Documentation | Art. 11 | Log specifications are part of the Annex IV technical file |
Human Oversight (Biometric) | Art. 14(5) | Four-eyes verification persons must be identified in logs |
Quality Management | Art. 17 | Record-keeping procedures integrated into the QMS |
Conformity Assessment | Art. 43 | Logs serve as evidence base during conformity assessment |
Post-Market Monitoring | Art. 72 | Logs are the primary data source for ongoing monitoring |
Data Minimisation | GDPR Art. 5(1)(c) | Log content must be proportionate — no over-logging |
Processing Register | GDPR Art. 30 | Log processing must be registered |
Right to Erasure | GDPR Art. 17 | Tension with retention obligations — unresolved |
DPIA | GDPR Art. 35 | Logging likely requires a Data Protection Impact Assessment |
The most important insight: Article 12 does not exist in isolation. Your logging system must be designed together with your risk management (Art. 9), documented in your technical file (Art. 11), and reconciled with your data protection framework (GDPR). These are not sequential tasks — they are concurrent design decisions.

Open Questions: What Is Still Missing
Article 12 leaves more open than most other AI Act articles. An honest assessment of the gaps:
JTC 21 logging standards are pending. The JRC analysis explicitly identifies the need for standardised logging specifications — but CEN-CENELEC JTC 21 has not published them yet. Providers must define event schemas, retention policies, and audit tool requirements without a harmonised standard to reference.
Neural network logging is not standardised. How to make deep learning decision processes reconstructable is an active research question, not a solved engineering problem. SHAP values, attention maps, and feature importance scores are useful but imperfect proxies. The regulation requires incident reconstruction capability — the methodology is left to the provider.
"Substantial modification" is not precisely defined. Article 12 requires logging of events that could indicate substantial modifications (Recital 128). But the threshold for what constitutes a substantial modification is not precisely specified. Is a model retrain substantial? A hyperparameter change? A training data update? Providers must make defensible judgment calls.
GDPR harmonisation is pending. As discussed in detail above, the interaction between AI Act logging obligations and GDPR data protection requirements has not been formally harmonised. This is the most significant open question for any provider processing personal data.
Event definitions are system-specific. The JRC analysis explicitly states that providers should define events, metrics, and information system-specifically. There is no universal event catalogue. What constitutes a loggable event for a recruitment system is different from a medical diagnosis system. This is by design — but it means every provider must make their own determination.
How TrustTroiAI Helps
TrustTroiAI's Article 12 template addresses both layers of the logging obligation: compliance documentation and technical implementation.
Part A: Compliance Documentation covers system identification, classification (including biometric system check), deployment architecture, and the formal logging specification. This is the document that auditors and notified bodies review — it describes what your logging system captures and how it is governed.
Part B: Technical Implementation Specification goes deeper — event schemas in JSON format, retention policies with differentiated tiers, and an 8-phase implementation checklist from infrastructure setup through testing and monitoring. This is the document your DevOps and ML Engineering teams work from.
The template handles conditional sections automatically: if your system is classified as biometric remote identification, the enhanced Article 12(3) requirements appear. If it uses neural networks, the explainability logging section activates. If it does not — those sections are excluded.
Build your Article 12 logging specification with TrustTroiAI
From compliance documentation to implementation checklist — both layers in one template.
→ Start your logging specification: trusttroiai.com
→ Next: Article 12 in Practice — How to build a logging system step by step
Key Takeaways
→ Article 12 is a technical construction requirement — your AI system must be built to log events automatically. Retroactive logging add-ons may not satisfy the regulation's intent.
→ Logging serves three purposes: traceability (what happened), conformity verification (is the system still compliant), and post-market monitoring (is something going wrong).
→ Four roles share logging responsibility: DevOps builds the infrastructure, ML Engineers define event schemas, Product Managers own proportionality decisions, and Compliance Officers navigate the GDPR intersection.
→ The GDPR tension is real and not yet resolved. Pseudonymisation, differentiated retention, and DPIAs are defensible interim measures — but definitive guidance from the EDPB is still missing.
→ Most high-risk AI systems follow general logging requirements. The enhanced biometric requirements (Art. 12(3) with four-eyes principle) apply only to systems under Annex III, No. 1(a).
Source
PRIMARY SOURCES:
1. EU AI Act 2024/1689, Article 12 — Record-Keeping
2. EU AI Act 2024/1689, Recital 71 — Logging Intent
SECONDARY SOURCES:
3. JRC132833 — Analysis of the preliminary AI standardisation work plan in support of the AI Act
4. GDPR (Regulation 2016/679), Articles 5, 6, 17, 30, 35
RELATED ARTICLES:
5. EU AI Act 2024/1689, Articles 9, 11, 14(5), 17, 43, 72, Recital 128
STANDARDS:
6. CEN-CENELEC JTC 21 — Logging Standards for AI Act (in development)