A major French retailer using AI-driven pricing and customer profiling needed EU AI Act compliance before August 2026. We conducted a full risk classification audit and delivered a comprehensive remediation roadmap.
Our client, one of France’s largest multi-format retailers with over 4,000 stores and a growing e-commerce presence, had deployed 14 distinct AI systems across its operations. These spanned dynamic pricing algorithms, personalised product recommendation engines, customer segmentation models, fraud detection pipelines, workforce scheduling tools, and an internal employee performance monitoring system. Each had been developed or procured independently by different business units over the preceding three years.
There was no centralised AI inventory. No single team had visibility into which AI systems existed, what data they consumed, what decisions they influenced, or what level of human oversight was in place. The board had grown increasingly concerned after the EU AI Act was formally adopted, and the August 2026 compliance deadline was approaching rapidly. Cross-departmental ownership of AI systems was fragmented—pricing sat under commercial, recommendations under digital, fraud detection under finance, and employee monitoring under HR—with no shared governance framework connecting them.
The stakes were significant. Under the EU AI Act, non-compliance penalties can reach up to €35 million or 7% of global annual turnover, whichever is higher. Beyond the financial exposure, the reputational risk of being among the first major retailers publicly sanctioned under the new regulation was unacceptable to the executive team. They needed clarity on their risk posture, and they needed it fast.
The Chief Digital Officer brought Insightrix in with a focused brief: catalogue every AI system in the organisation, classify each one under the EU AI Act risk framework, identify the most critical compliance gaps, and deliver an actionable remediation roadmap—all within two weeks.
“We had AI systems making pricing decisions, profiling customers, and even monitoring employee productivity—but nobody could tell me which of those were actually high-risk under the new regulation. We needed someone who could cut through the noise and give us a clear picture of what was truly at stake.”
— Chief Digital Officer, Major French Retail Group, Paris
We began by cataloguing all 14 AI systems across every business unit. For each system, we documented the underlying model type, training data sources, input and output data flows, the nature and scope of automated decisions, downstream impact on individuals, and the level of human oversight currently in place. We conducted structured interviews with system owners in commercial, digital, finance, and HR to understand not just the technical architecture but the business context—who relied on each system’s outputs, how those outputs influenced real-world decisions, and where human review existed (or didn’t). The result was the company’s first comprehensive AI register.
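A register of this kind is essentially a structured record per system. The sketch below shows one way such an entry might be modelled; the field names and the sample values are illustrative assumptions, not the client's actual schema.

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    """One row in an AI system register (illustrative fields only)."""
    name: str
    business_unit: str          # e.g. "commercial", "digital", "finance", "HR"
    model_type: str             # e.g. "demand-forecasting model"
    data_sources: list[str]     # training and inference data inputs
    decision_scope: str         # what the system's outputs influence
    affects_individuals: bool   # downstream impact on customers or employees
    human_oversight: str        # "none", "review-on-exception", "human-in-the-loop"

register = [
    AIRegisterEntry(
        name="dynamic-pricing",
        business_unit="commercial",
        model_type="demand-forecasting model",
        data_sources=["sales history", "competitor prices"],
        decision_scope="shelf and online prices",
        affects_individuals=True,
        human_oversight="review-on-exception",
    ),
]
```

Capturing oversight level and downstream impact as explicit fields is what makes the later risk classification mechanical rather than ad hoc.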
Using the inventory as our foundation, we classified each of the 14 systems according to the EU AI Act’s four-tier risk framework: prohibited, high-risk, limited risk, and minimal risk. Three systems were classified as high-risk under the regulation. The dynamic pricing engine fell under high-risk due to its potential to affect consumer purchasing decisions at scale without adequate transparency. The credit scoring model used in the client’s financial services arm qualified as high-risk under Annex III provisions governing creditworthiness assessment. The employee monitoring system triggered high-risk classification due to its use in employment-related decision-making. Five systems were classified as limited risk and six as minimal risk, with none falling into the prohibited category.
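The classification logic above can be sketched as a triage rule. This is a deliberately simplified helper under assumed inputs, not a legal determination: it flags Annex III-style use cases (creditworthiness assessment, employment decisions) and opaque systems with large-scale consumer impact as high-risk candidates for legal review.

```python
# Annex III-style use cases that trigger high-risk review (illustrative subset).
ANNEX_III_TRIGGERS = {"creditworthiness assessment", "employment decisions"}

def triage_risk_tier(use_case: str, affects_individuals: bool,
                     transparency_in_place: bool) -> str:
    """Map a system's attributes to a candidate EU AI Act risk tier."""
    if use_case in ANNEX_III_TRIGGERS:
        return "high-risk"
    if affects_individuals and not transparency_in_place:
        return "high-risk"       # e.g. opaque dynamic pricing at scale
    if affects_individuals:
        return "limited-risk"    # transparency obligations still apply
    return "minimal-risk"
```

A rule like this only produces candidates; each high-risk flag still needs review against the regulation's actual text before any classification is final.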
For each of the three high-risk systems, we conducted a detailed gap analysis against the specific requirements outlined in Articles 9 through 15 of the EU AI Act. This covered five key compliance domains: risk management systems (Article 9), data governance and data quality protocols (Article 10), technical documentation and record-keeping (Articles 11 and 12), transparency and information provision to users (Article 13), and human oversight mechanisms (Article 14). We also assessed accuracy, robustness, and cybersecurity requirements under Article 15. The analysis revealed a 47% overall compliance gap across the three high-risk systems, with the most significant deficiencies in transparency obligations and human oversight provisions.
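An overall gap figure of this kind is typically an average of per-domain scores. The sketch below illustrates the arithmetic with hypothetical per-domain scores (fraction of requirements met) chosen only to demonstrate the method; they are not the client's actual assessment results.

```python
# Hypothetical per-domain compliance scores for one high-risk system.
# Domain names follow Articles 9-15 of the EU AI Act.
domain_scores = {
    "risk management (Art. 9)":       0.70,
    "data governance (Art. 10)":      0.65,
    "documentation (Arts. 11-12)":    0.60,
    "transparency (Art. 13)":         0.30,
    "human oversight (Art. 14)":      0.35,
    "accuracy & security (Art. 15)":  0.58,
}

# Overall gap = 1 minus the mean fraction of requirements met.
met = sum(domain_scores.values()) / len(domain_scores)
gap_pct = round((1 - met) * 100)
print(f"overall compliance gap: {gap_pct}%")
```

Weighting domains equally is itself an assumption; in practice, domains can be weighted by enforcement exposure or remediation cost.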
We designed a fit-for-purpose AI governance structure tailored to the client’s organisational reality. This included the creation of an AI Officer role reporting directly to the CDO, the establishment of a cross-functional AI Ethics Board with representatives from legal, compliance, technology, and business operations, and an expanded Risk Committee mandate covering AI-specific risk assessment in its quarterly reviews. We drafted a Responsible AI Policy covering the full AI lifecycle from procurement and development through deployment, monitoring, and decommissioning. We defined escalation procedures for high-risk system changes, incident reporting protocols, and ongoing monitoring requirements. The complete governance framework, remediation roadmap with 12 prioritised actions, and executive briefing were delivered on day ten.
Every one of the 14 AI systems across pricing, recommendations, segmentation, fraud detection, and workforce management was formally catalogued and classified under the EU AI Act’s four-tier risk framework for the first time.
Dynamic pricing, credit scoring, and employee monitoring were classified as high-risk, triggering mandatory compliance requirements under Articles 9–15 including technical documentation, human oversight, and conformity assessment obligations.
The gap analysis quantified the compliance deficit across all three high-risk systems. The 12-action remediation roadmap provided a clear, prioritised path to close every gap before the August 2026 enforcement deadline.
From the initial stakeholder interviews to executive briefing and final roadmap handover, the entire engagement was completed in ten working days—giving the client five months of runway before the compliance deadline.
Conducted structured interviews with system owners across commercial, digital, finance, and HR departments. Catalogued all 14 AI systems, documented data flows, decision-making impact, and current human oversight levels. Produced the organisation’s first comprehensive AI register.
Classified each system under the EU AI Act’s four-tier risk framework. Identified three high-risk systems requiring mandatory compliance measures. Began detailed gap analysis against Articles 9–15 requirements for each high-risk deployment.
Designed the AI governance structure including AI Officer role, Ethics Board composition, and Risk Committee mandate. Drafted the Responsible AI Policy covering the full system lifecycle. Defined monitoring procedures and incident reporting protocols.
Presented findings to the executive committee and board risk committee. Delivered the 12-action prioritised remediation roadmap, complete AI register, governance framework documentation, and implementation timeline aligned to the August 2026 enforcement deadline.
Book a free AI readiness call. We'll discuss your challenges and outline a clear path forward.