AI Act – Everything about the new rules for Artificial Intelligence in the EU


In recent years, artificial intelligence has become not only a tool for innovation but also the subject of intense legal debate. As its impact on society, markets, and fundamental rights continues to grow, the European Union adopted the first comprehensive regulation in this field – the regulation on harmonised rules on artificial intelligence, better known as the AI Act. In this article, we will explore how the AI Act changes the rules for the development and use of AI technologies, which principles it introduces, and what it means in practice for different sectors.

What is the AI Act?

The AI Act is a European regulation that entered into force on 1 August 2024. Its objective is to establish uniform rules for the development, placing on the market, and use of artificial intelligence systems (AI systems) within the EU.

An AI system is software, or a machine-based system, that autonomously performs tasks based on data processing, pattern recognition, and model-based outputs – activities that would otherwise require human decision-making. Typical examples include speech and image recognition, translation tools, chatbots, recommendation algorithms in e-commerce, medical diagnostic tools, and credit scoring systems.

The primary goal of the AI Act is to ensure that these systems:

  • operate safely and transparently,
  • make fair, non-discriminatory decisions,
  • comply with EU law and respect fundamental rights.

Which obligations already apply under the AI Act?

The main provisions of the regulation will become fully applicable from 2 August 2026. However, some obligations already apply:

Users must be informed that they are interacting with AI. For example, if you deploy a chatbot on your website, it must be clearly labelled so users understand they are not communicating with a human.

Certain AI systems are outright prohibited. These include, in particular, real-time biometric identification of individuals in public spaces (with very limited statutory exceptions) and systems that exploit the vulnerabilities of children or other vulnerable groups.

Companies and developers should already begin preparing. It is advisable to document how your AI systems function, how they process data, and how risks are managed, so that by August 2026 all requirements can be met without last-minute pressure.

Who does the AI Act apply to?

The AI Act applies to all entities involved in the development, placing on the market, or use of AI systems within the EU – regardless of whether they are established inside or outside the EU. What matters is the impact on the EU market, not the company’s registered address.

The regulation does not concern only developers and providers who create AI systems. It also applies to users (referred to as deployers) who implement these systems in practice. Additionally, it covers distributors, importers, manufacturers of products incorporating AI functions, and authorised representatives acting on behalf of companies established outside the EU.

In simple terms: if you place an AI system on the EU market, supply it, or operate it within the EU, the AI Act applies to you.

Classification of AI systems based on risk

One of the fundamental principles of the AI Act is the risk-based approach. AI systems are divided into four categories depending on their potential impact on safety, health, fundamental rights, or the public interest.

Unacceptable risk

AI systems that present unacceptable risk are prohibited outright. These include real-time biometric identification in public spaces, social scoring systems, and AI that deliberately manipulates vulnerable groups such as children or persons with disabilities.

High risk

High-risk systems are those that can significantly affect an individual’s life or the functioning of society, particularly when used in decision-making with real-world consequences. These systems are subject to strict requirements. They must undergo conformity assessment procedures and be registered. Risk is always assessed in relation to the intended purpose, context, and manner of use. Examples include AI used in recruitment processes, medical diagnostics, allocation of public services, or creditworthiness assessment.

Limited risk

Limited-risk systems are those that do not independently decide on essential matters. In these cases, the regulation primarily focuses on transparency obligations. Users must clearly understand that they are interacting with AI rather than a human (for example, chatbots).

Minimal risk

Minimal-risk systems include most common AI tools such as recommendation algorithms in e-commerce or automated translation services. These systems are not subject to specific regulatory obligations under the AI Act.
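The four tiers above can be summarised as a simple lookup. The sketch below is purely illustrative and non-normative: the tier names and obligations come from this article, while the example use-case mapping is an assumption for demonstration only – under the AI Act, classification always depends on the intended purpose, context, and manner of use, not on a label alone.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, with the obligations named in this article."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, registration, CE marking"
    LIMITED = "transparency obligations (e.g. label the chatbot as AI)"
    MINIMAL = "no specific obligations under the AI Act"

# Illustrative mapping of the examples mentioned in the article to a tier.
# This is NOT a legal classification tool.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "creditworthiness assessment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "e-commerce recommendation algorithm": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of the tier and obligations for a known example."""
    tier = EXAMPLES[use_case]
    return f"{use_case}: {tier.name} risk -> {tier.value}"

print(obligations("recruitment screening"))
```

The point of the sketch is the structure of the regulation, not the mapping itself: the same system description can land in a different tier once its real deployment context is assessed.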

AI in companies under the AI Act

At Cybrela Academy, we have prepared an online course that guides you through the world of the AI Act step by step. You will learn exactly what your company must comply with, how to address AI-related clauses in supplier contracts, and how to set internal rules so that AI becomes a benefit rather than a legal risk. Start learning today and gain clarity before the rules become fully applicable.

Impact on organisations

If your AI system falls into the high-risk category, you must comply with specific obligations. The regulation is relatively detailed, particularly in the following areas:

  • risk management throughout the system's life cycle,
  • quality and governance of training, validation, and testing data,
  • technical documentation and record-keeping (logging),
  • transparency and provision of information to deployers,
  • human oversight,
  • accuracy, robustness, and cybersecurity.

High-risk AI systems placed on the market are also subject to CE marking requirements, similar to other regulated technologies.

The AI Act also takes small and medium-sized enterprises into account. It introduces regulatory sandboxes – controlled testing environments where companies can develop and test AI under the supervision of authorities before placing it on the market, without facing penalties for potential shortcomings.

How is the Czech Republic preparing for the AI Act?

In 2025, the Czech government approved a draft Artificial Intelligence Act to complement the AI Act at the national level. This law designates supervisory authorities, defines how the regulatory sandbox will operate, and sets out sanctions for non-compliance.

The national AI Act has not yet been adopted by Parliament. It is currently progressing through the legislative process, and its final form may still change. The regulatory sandbox has been approved as a government project, but detailed participation and operational rules have not yet been firmly established in legislation. Authorities are currently preparing methodological guidance and clarifying specific conditions.

Who has which role?

  • MPO (Ministry of Industry and Trade) – national coordinator of the AI Act; oversees the overall agenda and ensures coordination among authorities.
  • ČTÚ (Czech Telecommunication Office) – main supervisory authority for most AI systems; also serves as the contact point for reporting and international cooperation.
  • ČNB (Czech National Bank) – supervises the use of AI in the financial sector.
  • ÚOOÚ (Office for Personal Data Protection) – monitors compliance with personal data protection and fundamental rights when AI is used.
  • ÚNMZ (Office for Technical Standardisation, Metrology and State Testing) – designates notified bodies for AI conformity assessment and supervises their activities.
  • ČAS (Czech Agency for Standardisation) – operates the regulatory sandbox, provides methodological support, and assists in implementing European AI standards.

Key takeaways

The AI Act represents a major step toward regulating artificial intelligence in Europe. It establishes rules for the development and use of AI systems based on risk level and provides greater legal certainty not only for providers and developers but also for users.

Some obligations will increase administrative and financial demands. On the other hand, they create clearer boundaries, strengthen safety and transparency, and enhance the protection of fundamental rights. For companies, this primarily means that artificial intelligence can no longer be treated solely as a technological topic operating independently, but as a matter of risk management and corporate responsibility.

We can help you with the AI Act

Not sure how the AI Act affects your processes? We will assess your specific use of AI and advise you on the right course of action.

