Brussels (The European Times) – European Ombudswoman Teresa Anjinho has opened an inquiry into how the European Commission manages transparency, inclusiveness, and accountability in the development of harmonised standards for artificial intelligence (AI), a cornerstone of the forthcoming implementation of the EU AI Act.
The investigation, formally registered as Case 1974/2025/MIK, follows a complaint by a civil society organisation that raised concerns over opaque practices within European standardisation bodies. The complainant argued that the groups drafting the standards are not required to disclose who participates, nor do they publish meeting minutes, leaving the process largely shielded from public scrutiny.
Questions for the Commission
As a first step, the Ombudswoman has asked the Commission to clarify:
- the composition and selection criteria of the expert groups;
- the transparency and participation rules applied by the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC);
- how the Commission monitors and evaluates the standardisation process; and
- what mechanisms exist to review and ensure balance among stakeholders.
Her office also requested internal documents relating to the Commission’s oversight of these procedures.
Why the issue matters
Harmonised standards offer a technical route to compliance with EU legislation. Once recognised by the Commission, they allow companies to demonstrate that their products, services, or algorithms meet the bloc's regulatory requirements, effectively becoming gateways to the EU market.
The Commission’s May 2023 standardisation request tasked CEN and CENELEC with translating the AI Act’s principles into workable norms, ensuring that the resulting standards minimise risks to safety and fundamental rights. However, these private bodies are now handling high-impact ethical and social dimensions usually reserved for policymakers, raising questions about democratic oversight.
A test for EU accountability in AI
The Ombudswoman's inquiry reflects growing concern that technical standardisation, often seen as neutral, could shape how AI systems affect human rights and the public interest across Europe. Her findings could determine whether the EU's promise of "trustworthy AI" is matched by transparent and inclusive governance.