
EU AI Act and ERP: Compliance Guide for Businesses in 2026

The European AI Act becomes enforceable in August 2026. This guide covers which ERP modules are affected, the obligations it creates for businesses, and a compliance checklist.


Regulation (EU) 2024/1689, commonly known as the AI Act, entered into force on 1 August 2024. It’s the world’s first comprehensive legal framework regulating artificial intelligence. Its progressive implementation culminates on 2 August 2026, when obligations on high-risk AI systems become enforceable. For businesses using ERPs with integrated AI functionality (forecasting, scoring, chatbots, HR automation), the question is no longer theoretical: preparation is essential.

This article breaks down the concrete AI Act obligations for ERP vendors and user companies, module by module.

AI Act: Key timeline and obligations recap

Important dates to remember

The AI Act follows a gradual implementation timeline (official text on EUR-Lex):

  • 1 August 2024: regulation entry into force
  • 2 February 2025: prohibition of banned AI practices (Article 5), such as social scoring or subliminal manipulation
  • 2 August 2025: obligations for General Purpose AI (GPAI) model providers, including technical documentation and training data summaries (Baker McKenzie, August 2025)
  • 2 August 2026: application of high-risk AI system rules (Annex III) and transparency obligations (Article 50)
  • 2 August 2027: mandatory compliance for GPAI models placed on the market before August 2025

For a CIO or DPO, the critical date is August 2026: that’s when sanctions become applicable to high-risk AI systems deployed in the enterprise.

The 4 risk levels and AI embedded in ERPs

The AI Act classifies AI systems into four categories:

  1. Unacceptable risk (prohibited): social scoring, behavioral manipulation. No serious ERP embeds this type of functionality.
  2. High risk (Annex III): systems subject to strict compliance, documentation and human oversight requirements. Certain ERP modules fall into this category.
  3. Limited risk: systems subject to transparency obligations (Article 50). Most ERP AI assistants and chatbots fall here.
  4. Minimal risk: no specific obligations. Anti-spam filters or automatic document categorization, for example.

The difficulty for businesses: a single ERP may contain modules falling under different risk levels. It’s not sufficient to classify “the ERP” into one category—each AI functionality must be examined separately.

Which ERP modules are affected by the AI Act?

Supplier scoring and automated credit scoring: potential high risk

Annex III of the AI Act explicitly classifies as “high risk” AI systems intended to “evaluate the creditworthiness of natural persons or establish their credit score” (Annex III, point 5(b)). If your ERP uses a scoring algorithm to automatically evaluate supplier or customer reliability, you must verify whether this scoring covers natural persons.

Obligations for high-risk systems are substantial: risk management system, quality training data, detailed technical documentation, operation logging, human oversight, accuracy and robustness.

Concretely: if your ERP automatically assigns a credit score to an individual customer or sole trader, this module likely falls under the high-risk regime. Conversely, scoring covering exclusively legal entities is not targeted by this provision.

Demand forecasting and AI planning: limited risk

Demand forecasting modules, automatic stock planning, or supply chain optimization use predictive models but generally don’t concern natural persons. These systems don’t appear in Annex III and fall under limited or minimal risk.

The main obligation: if the system generates content or recommendations presented to a user, the company must inform that user that they're interacting with an AI system (Article 50).

Internal chatbots and AI assistants: transparency obligation

SAP Joule, Microsoft Copilot in Dynamics 365, Oracle Digital Assistant, Sage Copilot, Odoo AI: all major vendors now integrate conversational assistants in their ERPs. Article 50 of the AI Act imposes a clear obligation: “Providers shall ensure that AI systems intended to interact directly with natural persons are designed in such a way that the persons concerned are informed that they are interacting with an AI system” (Article 50, AI Act).

In December 2025, the European Commission published a first draft Code of Practice on AI-generated content transparency, with a final version expected for June 2026, just before enforcement (European Commission, December 2025).

What this changes: an ERP chatbot must clearly indicate that it’s an AI. The content it generates (emails, reports, summaries) must be identifiable as AI-generated in machine-readable format.
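Article 50's machine-readable marking could be implemented along these lines. This is a minimal Python sketch assuming a JSON disclosure envelope; the field names are illustrative, since the exact format will only be fixed by the final Code of Practice:

```python
import json
from datetime import datetime, timezone

def mark_ai_generated(content: str, model_name: str) -> str:
    """Wrap AI-generated content in a machine-readable disclosure envelope.

    The field names here are illustrative assumptions: the AI Act requires
    machine-readable marking but does not prescribe a schema.
    """
    envelope = {
        "content": content,
        "ai_generated": True,  # Article 50 disclosure flag
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope, ensure_ascii=False)

marked = mark_ai_generated("Quarterly stock summary", "erp-assistant-v1")
```

Any downstream consumer (an email gateway, a document archive) can then check the `ai_generated` flag before presenting the content to a user.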

AI-assisted recruitment and HR evaluation: confirmed high risk

This is the ERP use case most clearly targeted by the AI Act. Annex III unambiguously classifies as “high risk” AI systems “intended to be used for the recruitment or selection of natural persons, notably for placing targeted job advertisements, analysing and filtering applications, and evaluating candidates” (Annex III, point 4).

HR/HRIS modules integrated into an ERP that use AI for CV sorting, performance evaluation or promotion decision support are directly concerned. Oracle HCM Cloud and SAP SuccessFactors, which already integrate AI-powered talent acquisition features, will need to document and certify compliance of these functionalities.

What ERP vendors have already announced

SAP: ISO 42001 certification and Joule use case classification

SAP obtained ISO 42001 certification for AI governance in Q3 2025, covering Joule, SAP AI Core and SAP AI Launchpad (SAP News Center, Q3 2025). SAP’s AI Ethics Handbook introduces three risk categories (standard, high-risk, red-line), with appropriate oversight levels for each category.

SAP states that customer data remains in their own tenancy and is never used to train third-party models. Automatic data purging after log rotation and GDPR compliance measures are integrated into Joule (SAP, AI Ethics page).

This is the vendor with the most structured approach at this stage, consistent with its enterprise customer base subject to strong regulatory requirements.

Microsoft: Copilot and transparency in Dynamics 365

Microsoft has published 33 Transparency Notes since 2019, detailing capabilities and limitations of its AI tools (Microsoft Trust Center, EU AI Act page). Dedicated teams combine AI governance, engineering, legal and public policy to prepare AI Act compliance.

Important point for Dynamics 365 user companies: Copilot is classified as “limited risk” by default (transparency obligation only). But if integrated into a high-risk workflow, such as candidate sorting or performance evaluation, the entire process shifts to high risk. The responsibility then falls on the deployer, i.e., the user company.

Oracle, Sage, Odoo: where do the other players stand?

Oracle has deployed AI features across its cloud applications, including HCM Cloud’s talent acquisition AI and ERP Cloud’s intelligent automation. Oracle’s Responsible AI principles emphasize fairness, accountability and transparency (Oracle AI Ethics page). However, specific AI Act compliance documentation remains limited in public communications.

Sage deployed Sage Copilot for X3 in 2025, first in an Early Adopter program, then in general availability (Inixion, Sage X3 2025 R2). The copilot enables natural-language data queries and delivers proactive alerts. No public communication on AI Act compliance yet.

Odoo launched the Odoo AI App in April 2025 with machine learning lead scoring, sentiment analysis and automatic writing features. Odoo 20, expected September 2026, should extend these capabilities (Odoo Experience 2025). The Belgian vendor, smaller than SAP or Microsoft, hasn’t published an AI Act roadmap yet.

The pattern is clear: the largest global vendors (SAP, Microsoft, Oracle) are ahead on compliance documentation. Smaller European vendors such as Sage and Odoo are actively integrating AI into their products but remain discreet about their regulatory preparation.

AI Act compliance checklist for ERP user companies

The AI Act distinguishes two roles: the provider (the vendor who develops the AI system) and the deployer (the company that uses it). Both have obligations, but they differ.

1. Inventory of AI functionalities enabled in your ERP

First step: precisely identify which AI modules are activated in your ERP installation. Many companies don’t know that certain AI functionalities are active by default.

Questions to ask your vendor or integrator:

  • Which modules use machine learning algorithms or generative AI?
  • Are these modules enabled by default or on opt-in?
  • What data feeds these models?
  • Are models trained on your data or pre-trained?

2. Risk level classification by module

For each identified AI functionality, determine its risk category:

| AI Module | Probable Category | Main Obligation |
| --- | --- | --- |
| Credit/solvency scoring (natural persons) | High risk | Full Annex III compliance |
| CV sorting / HR evaluation | High risk | Full Annex III compliance |
| Chatbot / conversational assistant | Limited risk | Transparency (Article 50) |
| Demand/stock forecasting | Minimal risk | No specific obligation |
| Accounting anomaly detection | Minimal risk | No specific obligation |
| Report/email generation | Limited risk | AI marking (Article 50) |
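The classification above can be sketched as a simple lookup that flags which enabled modules trigger full high-risk obligations. This is a minimal Python sketch; the module names and the mapping are illustrative, and real classification requires a legal review of each module against Annex III:

```python
from enum import Enum

class Risk(Enum):
    HIGH = "high"        # full Annex III compliance
    LIMITED = "limited"  # Article 50 transparency
    MINIMAL = "minimal"  # no specific obligation

# Illustrative mapping, mirroring the table above; not a legal determination.
MODULE_RISK = {
    "credit_scoring_natural_persons": Risk.HIGH,
    "cv_sorting": Risk.HIGH,
    "chatbot": Risk.LIMITED,
    "demand_forecasting": Risk.MINIMAL,
    "anomaly_detection": Risk.MINIMAL,
    "report_generation": Risk.LIMITED,
}

def modules_needing_annex_iii(enabled_modules: list[str]) -> list[str]:
    """Return the enabled modules that trigger full high-risk obligations."""
    return [m for m in enabled_modules if MODULE_RISK.get(m) is Risk.HIGH]
```

For example, an installation with the chatbot, CV sorting and demand forecasting enabled would surface only `cv_sorting` as requiring the full Annex III compliance workload.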

3. Documentation and traceability of automated decisions

For high-risk systems, the deploying company must:

  • Keep a register of decisions made or assisted by the AI system
  • Preserve the logs generated by the system, for a retention period appropriate to the purpose
  • Document the human oversight measures implemented
  • Conduct a fundamental rights impact assessment before deployment

4. Rights of concerned persons: explainability and recourse

The AI Act reinforces rights already established by GDPR. For high-risk systems:

  • Persons affected by an automated decision must be able to obtain a clear explanation
  • A recourse mechanism must be accessible
  • Human oversight must be effective, not theoretical: a competent person must be able to cancel or modify the AI’s decision

This point is particularly sensitive for HR modules. A candidate rejected by an automatic CV sorting system has the right to know why and to challenge this decision.

What remains unclear and vigilance points for 2026-2027

Commission guidelines expected in H2 2026

Several gray areas remain in applying the AI Act to ERPs:

  • Fine-grained classification: Article 6(3) provides that a system listed in Annex III can be exempted if it doesn’t pose a significant risk. The precise exemption criteria are still under discussion.
  • Internal compliance: for most Annex III high-risk systems, providers conduct their own conformity assessment (Annex VI internal procedure, without notified body). The reliability of this self-assessment remains to be proven.
  • Transparency code of practice: the final text is expected for June 2026, leaving very little time for companies to adapt before August 2026 application.

The question of foundation models used by ERP vendors

An emerging issue: ERP vendors integrating third-party foundation models (GPT-4, Claude, Gemini) into their products. The AI Act imposes obligations on GPAI model providers (documentation, training data summary, copyright respect), but the responsibility chain between model provider, ERP vendor and user company isn’t yet fully clarified.

If your ERP vendor uses a third-party foundation model to power its AI assistant, ask them:

  • Which model is used and by which provider?
  • Has the model provider published its GPAI documentation?
  • Does your data transit through servers outside the EU?

Sanctions: a deterrent lever

The AI Act sanction regime is aligned with the GDPR model, with three tiers (Article 99):

  • Prohibited practices (Article 5): up to €35 million or 7% of global annual turnover
  • Other infractions: up to €15 million or 3% of global turnover
  • Incorrect information to authorities: up to €7.5 million or 1% of global turnover

SMEs and startups benefit from a softened regime: the applicable cap is the lower of the turnover percentage and the fixed amount, rather than the higher.
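The three-tier cap logic, including the softened SME regime, reduces to a min/max choice. A short Python sketch of the Article 99 arithmetic:

```python
def fine_cap(tier_fixed_eur: float, tier_pct: float,
             global_turnover_eur: float, is_sme: bool) -> float:
    """Upper bound of a fine under Article 99 for a given tier.

    Standard regime: the higher of the fixed amount and the turnover
    percentage. SMEs and startups: the lower of the two.
    """
    pct_amount = tier_pct * global_turnover_eur
    if is_sme:
        return min(tier_fixed_eur, pct_amount)
    return max(tier_fixed_eur, pct_amount)

# Prohibited-practices tier (EUR 35M or 7%) for EUR 1bn turnover:
# standard regime -> 70M (7% wins); SME regime -> 35M (lower amount).
```

The same function covers the other two tiers by swapping in EUR 15M / 3% or EUR 7.5M / 1%.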

Act now rather than endure in August 2026

The AI Act is no longer a distant prospect. Prohibitions have been in force since February 2025, GPAI obligations since August 2025, and the bulk of the regime (high risk plus transparency) applies in less than four months. Companies waiting for the final Commission guidelines before starting compliance work are taking a calculated but real risk.

To deepen AI issues in ERPs, consult our top 5 ERP solutions with integrated AI in 2026. On related regulatory aspects, our ERP and GDPR guide covers data protection obligations, and our article on ERP systems cybersecurity addresses associated technical risks.