Governing Artificial Intelligence in Public Administration
Public administration faces a dual challenge. On the one hand, pressure is mounting to digitalize services and make them more citizen-centric. On the other hand, risk aversion still characterizes many decisions, a tendency that also shapes how administrations approach AI applications.
With the Regulation of the European Parliament and of the Council laying down harmonized rules for artificial intelligence, commonly referred to as the AI Act, the European Union has created a groundbreaking law that, for the first time, comprehensively regulates AI systems to ensure their safe use, and that also applies to public administration.
This globally unique legal framework pursues several key objectives: to protect citizens’ fundamental rights, to foster trust in AI applications, and to strengthen the responsible use of AI—without slowing down Europe’s capacity for innovation.
Classification of AI systems into risk categories
A central element of the Act is the classification of AI systems into four risk categories, based on their potential risk to society and individuals. Obligations scale with risk: minimal- and limited-risk systems (risk categories 1 & 2) face little more than transparency duties, high-risk systems (risk category 3) are subject to strict requirements, and unacceptable-risk systems (risk category 4) are banned outright.
Risk categories 1 & 2 cover AI solutions that must meet basic transparency obligations. Many applications fall into this group, such as citizen-facing chatbots that only access publicly available data. These systems must clearly identify themselves as AI in user interactions.
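To make the tiering concrete, here is a minimal sketch of how an administration might record a system’s risk category and look up its baseline duties. The tier names mirror the Act’s four categories, but the obligation strings and the `baseline_obligations` helper are illustrative placeholders, not wording from the Regulation.

```python
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers of the AI Act (the document's categories 1-4)."""
    MINIMAL = 1        # e.g. spam filters: no specific obligations
    LIMITED = 2        # e.g. citizen-facing chatbots: transparency duties
    HIGH = 3           # e.g. systems deciding on benefits: strict requirements
    UNACCEPTABLE = 4   # e.g. social scoring: prohibited outright

def baseline_obligations(category: RiskCategory) -> list[str]:
    """Illustrative mapping from risk tier to baseline duties.

    The labels are simplified placeholders; the actual obligations
    are set out in the Regulation itself.
    """
    return {
        RiskCategory.MINIMAL: [],
        RiskCategory.LIMITED: ["disclose AI use to the user"],
        RiskCategory.HIGH: ["risk management", "human oversight",
                            "logging and documentation"],
        RiskCategory.UNACCEPTABLE: ["must not be deployed"],
    }[category]

# A public chatbot that only accesses publicly available data would
# typically fall into the limited-risk tier:
print(baseline_obligations(RiskCategory.LIMITED))
```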
A reliable legal framework provides public administration with security
For the German public sector, the AI Act offers significant potential: the level of digitalization remains comparatively low, and AI is still described as “a niche topic in public administration”. The AI Act provides legal certainty and a reliable framework for the introduction and use of this technology.
The Act should therefore not be misunderstood as an administrative burden. Instead, it offers essential guidance for the responsible procurement and use of AI applications, while reducing hesitation by providing clear rules for practical implementation.
This legal certainty could prove crucial in ensuring not only that citizens’ demand for more user-friendly services is met, but also that administrations can maintain their operational capacity in the face of demographic change. The sooner AI solutions gain acceptance within public administration, the more widely they will be adopted.
Widespread adoption of AI would allow administrations to work more efficiently with fewer employees, as individual workloads decrease and capacities are freed up for other tasks. In this way, the AI Act enables administrations to remain functional and citizen-oriented despite shrinking staff numbers, while also driving their digital transformation forward. Ultimately, it creates sustainable added value for the public sector and tangible improvements for citizens.
Implementation of the AI Act
The urgency of engaging with the AI Act arises from its staggered timetable: its provisions enter into force in stages. On August 2, 2025, the rules governing general-purpose AI models, such as those underlying ChatGPT, took effect, alongside the penalties foreseen in the Act, which also apply to public bodies.
Currently, the AI Act stipulates primarily financial penalties for violations, targeting both private and public organizations. For public bodies, including EU institutions, agencies, and entities, infringements involving prohibited AI practices could result in fines reaching into the seven-digit range. Other violations of obligations or requirements may lead to six-digit fines.
To avoid such sanctions, it is now crucial for public administration to begin implementing the Act’s requirements and establish robust AI governance—both organizationally and technically. Administrations need to achieve compliance standards, develop cooperation models with supervisory authorities, and introduce risk-based auditing and transparent operating procedures.
These steps are essential to ensure the lawful and trustworthy use of AI in line with the AI Act.
Therefore, public administrations should consider establishing a governance framework when implementing AI solutions in order to ensure compliance with the AI Act. We recommend the following three-step approach; a sketch of how it could be recorded in practice follows the list:
- Inventory of AI applications: The first step is to document all current and planned AI applications. This process provides a comprehensive overview of all use cases, the stakeholders involved, and the sectors in which the systems are deployed.
- Risk-based clustering and definition of responsibilities: In the second step, all applications are clustered according to their risks and specific requirements. At the same time, the roles and responsibilities of all stakeholders are defined.
- Development of processes, roles, and responsibilities: Finally, based on the inventory and categorization, processes, roles, and responsibilities are established for each AI system to ensure compliance with the AI Act.
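As a minimal sketch, assuming a simple in-house register, the first two steps could be supported by a data model like the following. The `AISystem` record, the `cluster_by_risk` helper, and the sample entries are illustrative assumptions, not structures the Act prescribes.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AISystem:
    """One entry in the AI inventory (step 1)."""
    name: str
    purpose: str
    risk_category: int          # 1-4, per the Act's risk tiers
    stakeholders: list[str]     # units and roles involved
    owner: str = "unassigned"   # accountable role (step 2)

def cluster_by_risk(inventory: list[AISystem]) -> dict[int, list[AISystem]]:
    """Step 2: group all documented systems by their risk category."""
    clusters: dict[int, list[AISystem]] = defaultdict(list)
    for system in inventory:
        clusters[system.risk_category].append(system)
    return clusters

# Illustrative entries; a real inventory would include every
# current and planned application across the organization.
inventory = [
    AISystem("Citizen chatbot", "answers FAQs from public data",
             risk_category=2, stakeholders=["citizen services"],
             owner="digital services unit"),
    AISystem("Benefit triage assistant", "prioritizes benefit claims",
             risk_category=3, stakeholders=["social affairs", "legal"],
             owner="social affairs department"),
]

for category, systems in sorted(cluster_by_risk(inventory).items()):
    print(f"Risk category {category}: {[s.name for s in systems]}")
```

Grouping the register by risk tier makes the third step concrete: each cluster can then be assigned the processes, documentation duties, and review cycles appropriate to its category.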
The outcome of this process is a governance framework that minimizes an organization’s reputational and liability risks by maintaining oversight and control of all AI systems.
Key facts about the AI Act
The AI Act also applies to the public sector. Public administrations, just like private companies, must establish AI governance structures to ensure ongoing compliance. The process of creating such a structure follows a three-step approach, which will differ depending on the type and number of AI applications in each organization.
Some AI applications are already prohibited, and further provisions of the Act will be phased in over the coming years.
Non-compliance carries heavy penalties, including for public administrations: significant fines may be imposed in cases of violations.