The AI Act's enforcement gap: what Poland's new regulator reveals about Europe's challenge

Jan Króliński (MPP 2024) examines how Poland’s decision to create a brand-new, centralised authority to enforce the EU AI Act sets it apart from almost every other Member State, and what that choice reveals about the tensions governments face when regulating artificial intelligence.

Estimated reading time: 5 minutes

The EU’s Artificial Intelligence Act has been in force since August 2024, and EU lawmakers are already moving to amend it. 

This week, the European Parliament’s committees endorsed the Digital Omnibus on AI, a set of amendments that would extend key compliance deadlines and reshape the enforcement architecture. A plenary vote is expected on 26 March, and the Council agreed its negotiating mandate on 13 March. But while Brussels debates the rules, most EU countries are still working out how to actually enforce them.

The AI Act leaves Member States free to design their own national supervisory structures, and the choices they are making vary dramatically. Most are spreading oversight across a patchwork of existing regulators; France, for example, plans to involve 14 separate bodies.

Poland has taken the opposite approach. It is one of only two EU countries – alongside Lithuania – to designate a single entity as its sole market surveillance authority for AI, and the only one building an entirely new institution for the purpose. That institution is the Commission for the Development and Safety of Artificial Intelligence, known by its Polish acronym KRiBSI.

A European outlier

As of early 2026, only nine of the 27 Member States have officially designated their national AI authorities, with a further ten in the process of doing so. Among the 19 countries that have designated or proposed their governance models, the overwhelming pattern is dispersion: parcelling out market surveillance duties among whichever existing regulators already oversee the sectors where high-risk AI is deployed – data protection offices, financial supervisors, telecoms regulators, health agencies, and so on.

Poland’s choice to create a single, new body stands in sharp contrast. The rationale, set out in the government’s Regulatory Impact Assessment, rests on two arguments. First, AI specialists are scarce, and requiring every sectoral regulator to independently build AI oversight capacity would be wastefully expensive. Second, a dispersed model would trigger harmful competition between government agencies for that same narrow pool of experts. Centralisation, in this framing, is not an ideological preference but a resource-management strategy – one that echoes arguments made in the financial supervision literature about consolidating specialist staff where the public sector cannot compete with private-sector salaries.

To compensate for the risk that a horizontal authority might lack sector-specific knowledge, the draft gives KRiBSI a collegiate structure. Its membership includes representatives of the competition authority, the financial supervisor, the broadcasting council, and the telecoms regulator, embedding sectoral expertise directly within the decision-making body.

Under earlier drafts, the Commission was to be supported by a standalone Bureau with its own legal personality, but that structure was abandoned after fiscal objections from the Ministry of Finance. The February 2026 draft confirms KRiBSI as the designated authority for AI Act enforcement, yet nests its operational support – the dedicated unit responsible for day-to-day enforcement – within the Ministry of Digital Affairs. The staff, budget, and infrastructure on which the Commission depends will thus be administered by the very ministry whose policy portfolio it is supposed to oversee independently.

Novel instruments for shared problems

Beyond its supervisory structure, the Polish draft legislation introduces two instruments that address challenges shared across the EU. The first is a mechanism for ‘individual opinions’: binding determinations that a company can formally request from KRiBSI on how the regulation applies to its specific product or service. This offers the upfront legal certainty that businesses across Europe are clamouring for, and could complement the regulatory sandboxes established under the AI Act: a company could first seek a binding opinion to clarify whether its system counts as high-risk, then use the sandbox to test compliance.

The second is the Social Council for AI, an advisory body of 9 to 15 members drawn from academia, civil society, business chambers, and trade unions, all required to have expertise in areas such as AI, cybersecurity, or human rights. The Council is designed to help close the public sector’s expertise gap – a risk the government itself identified in its impact assessment. Members serve two-year terms, deliberately short to keep pace with rapid technological change. The Council’s opinions are not binding and its positions are unpaid, which may limit its influence, but the structure provides a foundation that can be strengthened as the enforcement regime matures.

What Poland’s choices tell us

Poland’s experience illuminates a tension that is unlikely to be unique to any one country: the pull between institutional ambition and fiscal constraint. Every Member State building its AI enforcement machinery faces the same uncomfortable questions. How much is robust, independent oversight actually worth? Can statutory independence guarantees compensate for administrative dependence? And when specialists are scarce, is it better to concentrate them in one place or embed them across many?

There are no clean answers. Poland’s centralised model offers a single point of contact for businesses and for EU-level coordination through the European AI Board, and it avoids the fragmentation that can slow down enforcement when responsibilities are scattered across a dozen agencies. But it places an extraordinary burden on a single body to develop sufficient understanding of every sector in which high-risk AI systems are deployed, from healthcare and finance to education and law enforcement. That is a gamble whose outcome will only become clear once the enforcement deadlines arrive.

The draft law has yet to reach Parliament, and amendments could still alter the design. But with more than two-thirds of EU Member States yet to finalise their own governance models, Poland’s early and distinctive bet – to centralise, to build new, and to supplement with advisory innovation – deserves close attention as a live experiment in what it takes to turn AI regulation from paper into practice.

This research began as a project during the author’s MPP summer placement and has since developed into a broader study of how EU countries are building their AI governance structures. A longer version of this analysis was published as a policy brief by interface, a Berlin-based policy research institute, and is available here. The analysis is based on the most recent draft legislation available as of late February 2026, before parliamentary consideration.