Challenges in implementing the EU AI Act for authorities.
Fragmented structures and complex responsibilities
Authorities operate with very diverse organizational structures, data landscapes, and decision-making processes. The EU AI Act, however, requires clear responsibilities, role definitions, and documented processes. Existing structures must be harmonized and AI governance coordinated across federal levels, often without unified guidelines or sufficient resources.
Lack of broad AI competencies
While individual departments are building expertise, many authorities lack a comprehensive understanding of how AI systems work, their limitations, and how to assess risks. Especially with generative AI, transparency, explainability, and safe usage require broad AI literacy, yet training and development programs are often not yet established.
Balancing legal certainty and innovation
Authorities must ensure regulatory compliance while actively promoting the use of AI. The fear of making mistakes or causing liability often leads to caution, even as the pressure grows to increase efficiency through digitization and automation and to mitigate staff shortages.
Holistic support for AI.
AI Readiness assessment for authorities
The starting point for a safe and strategic implementation of the EU AI Act. We conduct a structured assessment that evaluates your authority's current maturity in dealing with AI – technologically, organizationally, and in terms of processes. You will then know exactly where you stand and what steps are necessary to use AI safely and in a future-proof way.
- Analysis of existing AI strategy, governance structures, and competencies
- Assessment of data quality, infrastructure, and technical architecture
- Identification of compliance, roles, and process gaps in the context of the EU AI Act
- Development of a prioritized roadmap for safe and innovation-promoting AI use
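Taking stock of existing AI applications is the foundation of such an assessment. As a minimal sketch of what an AI inventory record might capture – the field names and the `AISystemRecord` type are illustrative assumptions, not a prescribed schema from the EU AI Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an authority's AI inventory (illustrative fields only)."""
    name: str
    department: str
    purpose: str
    risk_tier: str = "unclassified"   # filled in after risk assessment
    owner: str = ""                   # accountable role, e.g. an AI officer
    documentation: list[str] = field(default_factory=list)

# Example inventory with one system still awaiting classification
inventory = [
    AISystemRecord("DocBot", "Citizen Services", "FAQ chatbot for citizens"),
]

# A roadmap can prioritize systems that have not yet been classified
unclassified = [s.name for s in inventory if s.risk_tier == "unclassified"]
print(unclassified)
```

Keeping the inventory as structured records makes compliance gaps (missing owner, missing documentation, unclassified risk tier) directly queryable.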
AI roles and governance design
We support authorities in establishing robust governance that meets the regulatory requirements of the EU AI Act while enabling innovation. The result is clear, operational governance that ensures compliance and accelerates AI adoption:
- Definition of relevant roles and responsibilities (e.g., AI officers, risk managers, domain experts)
- Establishment of an AI governance focus group as a central steering body
- Development of clear rules, decision-making paths, and escalation mechanisms
- Operationalization of governance at the application level, providing employees with concrete guidance
Ethical & responsible AI – compliance and risk management
We help authorities align their AI systems with the EU AI Act and build trust among staff and citizens. Authorities receive a clear, legally sound framework for responsible AI use:
- AI risk analyses and classification according to the EU AI Act
- Establishment of processes for monitoring, documentation, and transparency
- Fairness checks and bias risk assessment
- Definition of technical and organizational measures for safe AI systems
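The classification step above can be sketched as a simple triage over the EU AI Act's four-tier risk model (prohibited, high-risk, limited-risk with transparency obligations, minimal-risk). The keyword lists below are illustrative assumptions for the sketch, not the Act's legal definitions, and a real assessment requires legal review:

```python
# Illustrative keyword lists – NOT the legal criteria of the EU AI Act
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"benefits eligibility", "recruitment", "exam scoring"}
TRANSPARENCY_USES = {"chatbot", "content generation"}

def triage_risk_tier(use_case: str) -> str:
    """Map a described use case to a provisional risk tier for triage."""
    text = use_case.lower()
    if any(u in text for u in PROHIBITED_USES):
        return "prohibited"
    if any(u in text for u in HIGH_RISK_USES):
        return "high-risk"
    if any(u in text for u in TRANSPARENCY_USES):
        return "limited-risk (transparency obligations)"
    return "minimal-risk"

print(triage_risk_tier("Automated benefits eligibility check"))
# → high-risk
print(triage_risk_tier("Chatbot for citizen inquiries"))
# → limited-risk (transparency obligations)
```

Such a triage only produces a provisional label to prioritize which systems need a full, documented risk assessment first.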
Implementation & training
We guide authorities through the practical introduction of AI applications and help their staff work confidently with AI, ensuring sustainable and compliant use:
- Implementation of identified measures, starting with quick wins
- Development and delivery of training sessions and workshops
- Establishment of AI champions within departments
- Ensuring user acceptance and responsible AI use in daily operations
EU AI Act Whitepaper
Important regulatory reference points for authorities.
August 2, 2026, is the critical date by which authorities must ensure full compliance with the AI Act's requirements.
€35M: the maximum fine under the EU AI Act for the most serious violations.
65%: share of deployed AI systems expected to be fully risk-assessed in the first year, as authorities typically require 6–12 months to complete a comprehensive AI inventory.
Frequently Asked Questions.
What is the EU AI Act and what is its objective?
The EU AI Act is the European Union’s first comprehensive regulatory framework for artificial intelligence. Its objective is to ensure the safe, transparent, and trustworthy use of AI while simultaneously fostering innovation.
Does the EU AI Act also apply to public authorities?
Yes. The EU AI Act explicitly applies to public authorities, particularly where AI systems are used in sensitive areas such as public administration, the judiciary, security, social benefits, or education.
Which AI systems are considered high-risk in public administration?
High-risk AI systems include, among others, systems that support or automate decisions relating to access to benefits, funding, employment, education, or administrative and governmental procedures.
What obligations will public administrations face?
Public authorities will be required, among other things, to conduct risk assessments, ensure appropriate technical documentation, comply with transparency and reporting obligations, and establish suitable governance and control structures.
When does the EU AI Act apply, and what are the transition periods?
The EU AI Act entered into force in August 2024, and its obligations apply in phases. Many requirements are subject to transitional periods, which public administrations should use to adapt their processes, systems, and organizational structures.
What does the EU AI Act mean for AI systems already in use?
Existing AI systems must be reviewed to determine whether they fall within the scope of the EU AI Act. Where applicable, adjustments, additional documentation, or organizational measures may be required.
What role does AI governance play in public administration?
AI governance ensures that AI systems are used in a legally compliant, ethically responsible, and transparent manner. It links legal requirements with organizational, technical, and cultural measures.
How can public administrations implement the EU AI Act pragmatically?
A structured approach is key: taking stock of existing AI applications, classifying risks, establishing clear responsibilities, training staff, and integrating AI governance into existing governance frameworks.
Does the EU AI Act hinder innovation in public administration?
No. The EU AI Act is not intended to stifle innovation, but to channel it in a clear and trustworthy direction. It provides legal certainty and promotes the responsible use of AI.
What sanctions apply in the event of non-compliance?
In the event of violations, significant fines may be imposed. For public authorities, however, reputational risks and loss of public trust are often the primary concerns.