- Veronika Beňová
In recent years, artificial intelligence has become not only a tool for innovation but also the subject of intense legal debate. As its impact on society, markets, and fundamental rights continues to grow, the European Union has adopted the first comprehensive regulation in this field – the Regulation laying down harmonised rules on artificial intelligence, better known as the AI Act. In this article, we will explore how the AI Act changes the rules for the development and use of AI technologies, what principles it introduces, and what it means in practice for different sectors.
What is the AI Act?
The AI Act is a European regulation that entered into force in August 2024. Its objective is to establish uniform rules for the development, placing on the market, and use of artificial intelligence systems (AI systems) within the EU.
An AI system is software or a device that autonomously performs tasks based on data processing, pattern recognition, and model-based outputs – activities that would otherwise require human decision-making. Typical examples include speech and image recognition, translation tools, chatbots, recommendation algorithms in e-commerce, medical diagnostic tools, and credit scoring systems.
The primary goal of the AI Act is to ensure that these systems:
- operate safely and transparently,
- make fair, non-discriminatory decisions,
- comply with EU law and respect fundamental rights.
Which obligations already apply under the AI Act?
The main provisions of the regulation will become fully applicable from August 2026. However, some obligations already apply:
- Users must be informed that they are interacting with AI. For example, if you deploy a chatbot on your website, it must be clearly labelled so users understand they are not communicating with a human.
- Certain AI systems are outright prohibited. These include, in particular, real-time biometric identification of individuals in public spaces (with very limited statutory exceptions) and systems that exploit the vulnerabilities of children or other vulnerable groups.
- Companies and developers should already begin preparing. It is advisable to document how your AI systems function, how they process data, and how risks are managed, so that by August 2026 all requirements can be met without last-minute pressure.
Who does the AI Act apply to?
The AI Act applies to all entities involved in the development, placing on the market, or use of AI systems within the EU – regardless of whether they are established inside or outside the EU. What matters is the impact on the EU market, not the company’s registered address.
The regulation does not apply only to developers and providers who create AI systems. It also applies to users (referred to as deployers) who implement these systems in practice. Additionally, it covers distributors, importers, manufacturers of products incorporating AI functions, and authorised representatives acting on behalf of companies established outside the EU.
In simple terms: if you place an AI system on the EU market, supply it, or operate it within the EU, the AI Act applies to you.
Classification of AI systems based on risk
One of the fundamental principles of the AI Act is the risk-based approach. AI systems are divided into four categories depending on their potential impact on safety, health, fundamental rights, or the public interest.
Unacceptable risk
AI systems that pose an unacceptable risk are prohibited outright. These include real-time biometric identification in public spaces, social scoring systems, and AI that deliberately manipulates vulnerable groups such as children or persons with disabilities.
High risk
High-risk systems are those that can significantly affect an individual’s life or the functioning of society, particularly when used in decision-making with real-world consequences. These systems are subject to strict requirements. They must undergo conformity assessment procedures and be registered. Risk is always assessed in relation to the intended purpose, context, and manner of use. Examples include AI used in recruitment processes, medical diagnostics, allocation of public services, or creditworthiness assessment.
Limited risk
Limited-risk systems are those that do not independently decide on essential matters. In these cases, the regulation primarily focuses on transparency obligations. Users must clearly understand that they are interacting with AI rather than a human (for example, chatbots).
Minimal risk
Minimal-risk systems include most common AI tools such as recommendation algorithms in e-commerce or automated translation services. These systems are not subject to specific regulatory obligations under the AI Act.
AI in companies under the AI Act
Impact on organisations
If your AI system falls into the high-risk category, you must comply with specific obligations. The regulation is relatively detailed, particularly in the following areas:
- The system must be designed to prevent errors and minimise negative impacts.
- You must ensure traceability of the data used and of the decisions generated by the system.
- You must verify that the system functions reliably, robustly, and safely even under non-standard conditions.
- You must maintain oversight of development, operation, and changes throughout the system’s lifecycle.
- You must establish human oversight, meaning AI decision-making must remain under human control, with the possibility of intervention where necessary.
High-risk AI systems placed on the market are also subject to CE marking requirements, similar to other regulated technologies.
The AI Act also takes small and medium-sized enterprises into account. It introduces regulatory sandboxes – controlled testing environments where companies can develop and test AI under the supervision of authorities before placing it on the market, without facing penalties for potential shortcomings.
How is the Czech Republic preparing for the AI Act?
In 2025, the Czech government approved a draft Artificial Intelligence Act to complement the AI Act at the national level. This law designates supervisory authorities, defines how the regulatory sandbox will operate, and sets out sanctions for non-compliance.
The national AI Act has not yet been adopted by Parliament. It is currently progressing through the legislative process, and its final form may still change. The regulatory sandbox has been approved as a government project, but detailed participation and operational rules have not yet been firmly established in legislation. Authorities are currently preparing methodological guidance and clarifying specific conditions.
Who has which role?
The draft law assigns roles to the following authorities and institutions:
- MPO (Ministry of Industry and Trade)
- ČTÚ (Czech Telecommunication Office)
- ČNB (Czech National Bank)
- ÚOOÚ (Office for Personal Data Protection)
- ÚNMZ (Czech Office for Standards, Metrology and Testing)
- ČAS (Czech Agency for Standardization)
Key takeaways
The AI Act represents a major step toward regulating artificial intelligence in Europe. It establishes rules for the development and use of AI systems based on risk level and provides greater legal certainty not only for providers and developers but also for users.
Some obligations will increase administrative and financial demands. On the other hand, they create clearer boundaries, strengthen safety and transparency, and enhance the protection of fundamental rights. For companies, this primarily means that artificial intelligence can no longer be treated as a purely technological topic that operates in isolation, but as a matter of risk management and corporate responsibility.