Artificial intelligence
EU AI Act: How Europe regulates the use of artificial intelligence
- Summary: The EU AI Act, in force since August 2024, is the world's first comprehensive law regulating artificial intelligence. It follows a risk-based approach with four categories: unacceptable, high, limited and minimal risk. Systems with unacceptable risk, such as social scoring, manipulative AI or biometric mass surveillance, are prohibited outright. The regulation affects almost all companies that develop, operate or use AI, regardless of their size. Violations can be punished with fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher. Companies should therefore review their AI systems, implement risk-based measures, establish transparency and, where applicable, apply for certification by August 2026 at the latest.
Artificial intelligence (AI) has made unprecedented progress over the last five years. AI is integrated into almost every digital system, interacts with us in natural language and answers almost any question within seconds.
From personalized health recommendations to autonomous vehicles, AI has changed the world lastingly. A particularly groundbreaking advance can be seen in the development of new drugs. Using big data and AI, huge amounts of data from clinical trials and genetic information can be analyzed almost in real time. Promising active ingredients can be identified faster and developed more cost-effectively with AI.
But this rapid AI revolution holds not only potential and opportunities but also risks. Discrimination by AI systems, unethical use and a lack of transparency are some of the key challenges that arise when using AI. In addition, some developers of artificial intelligence see the danger of AI taking control. Sam Altman, CEO of OpenAI, and other leading developers, organized in the Center for AI Safety, warn that the risk of extinction from AI must be taken as seriously as 'other societal-scale risks such as pandemics and nuclear war'.
With the EU AI Act, the world's first comprehensive set of rules for AI, the European Union (EU) wants to set clear limits in order both to minimize risks and to promote the development of safe and transparent AI applications. This article explains what the EU AI Act is and what it regulates within the European Union. It explains the background to its introduction and the categories into which artificial intelligence is classified. Among other things, you will learn how German and European companies are affected by the rules, which AI systems are regulated and what penalties apply in the event of non-compliance.
What is the EU AI Act and what does it regulate?
The EU AI Act was adopted by the Council of the European Union on 21 May 2024. The regulation entered into force in the European Union on 1 August 2024. It becomes fully applicable after a transitional period of 24 months, from August 2026; some rules are already binding.
The AI Act establishes a uniform legal framework for the development and use of artificial intelligence in the EU. The European-wide regulation adopts a risk-based approach, which provides for particularly strict requirements for high-risk AI systems and prohibitions for AI with unacceptable risk. Transparency obligations apply to low-risk AI systems. The declared objective of the EU AI Act is to protect the safety, health and fundamental rights of citizens, strengthen trust in AI, prevent abuse and enable fair competition and further digital innovation.
The AI Act is the world's first comprehensive law regulating artificial intelligence. The law applies to companies of all sizes, providers, operators, importers and also users of AI systems. The purely private, non-commercial use of AI is not regulated by the EU AI Act.
What led to the introduction of the EU AI Act
The introduction of the EU AI Act became necessary due to the massive use of AI in sensitive areas such as healthcare, justice and critical infrastructure. The EU’s approach to the EU AI Act is similar to that of the General Data Protection Regulation (GDPR). The EU AI Act aims to protect the citizens of the European Union and set global standards.
The objectives of the EU AI Act can be seen in the example of social scoring, which is common in China. The social scoring system in China evaluates the behavior of citizens, companies and organizations using digitally collected data. Video surveillance, the evaluation of online activities and entries in state registers result in a combined score for a citizen.
Plus points are awarded for desirable, system-compliant behavior. Among other things, Chinese citizens earn points through compliance with the law, social commitment such as volunteering, or economic reliability. A negative attitude towards the government, rule-breaking or poor performance at school and work lead to point deductions. A low score can result in sanctions such as travel bans and restrictions in everyday life.
The Chinese government wants to improve the trustworthiness and social behavior of its citizens through monitoring and control. Critics, however, clearly see social scoring as a tool for the comprehensive surveillance and disciplining of the population. Such use of AI is contrary to the EU's fundamental values, as it severely restricts citizens' privacy and fundamental rights.
The EU AI Act sets clear limits on such and similar AI-driven systems in order to prevent comparable scenarios in Europe. Strict rules and transparency requirements aim to ensure that AI systems are used ethically and legally.
How does the EU AI Act categorise AI systems?
The EU AI Act divides AI systems into four risk categories. These categories are based on their potential impact on people and society:
Unacceptable risk: This category includes applications that are considered dangerous or ethically unacceptable. Examples include social scoring systems or manipulative systems, such as AI-driven toys, that encourage children to engage in dangerous behavior. AI toys can respond to children individually through voice control, cameras and sensors and influence them in a targeted manner. Such applications are completely prohibited as they endanger fundamental rights and freedoms.
High risk: AI systems with a high risk operate in sensitive areas. These include, but are not limited to, healthcare, critical infrastructure, law enforcement, public administration and finance. Digital systems and software in these sectors have a potentially significant impact on the lives of people in the EU. If AI is used in these areas, it must comply with the strict rules of the EU AI Act. These include requirements for transparency, robustness and human oversight.
Limited risk: Under the EU AI Act, AI systems that interact with users fall into the limited-risk category. A typical example in this category are chatbots. Such applications must be transparent. Above all, this means that users know they are communicating with an AI and not with a human. This can be achieved, for example, through a message such as: "I am an AI system."
Minimal risk: The majority of today's known AI applications fall into the minimal-risk category. AI image editing software, AI text generators and AI-based spell checkers are largely exempt from regulation. Nevertheless, their use should follow the general principles of security and transparency.
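As a purely illustrative sketch, the four-tier scheme above can be expressed as a simple lookup. Note that the example mapping is hypothetical: the real classification follows the Act's definitions and annexes, not simple labels.

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical example mapping for illustration only.
EXAMPLES = {
    "social scoring": Risk.UNACCEPTABLE,
    "manipulative AI toy": Risk.UNACCEPTABLE,
    "medical diagnosis support": Risk.HIGH,
    "credit scoring": Risk.HIGH,
    "customer chatbot": Risk.LIMITED,
    "spell checker": Risk.MINIMAL,
}

def classify(system: str) -> Risk:
    """Return the illustrative risk class; unknown systems default to minimal."""
    return EXAMPLES.get(system, Risk.MINIMAL)
```

In practice the assessment is a legal analysis, not a dictionary lookup; the sketch only shows how the four categories relate to typical examples from the text.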
Labelling obligation for AI content
AI-generated images, videos and audio must be labelled if they constitute so-called deepfakes. AI-generated texts are also subject to labelling if they are published on matters of public interest. This applies, for example, to news.
With the four-part risk classification, the EU AI Act aims to ensure that AI systems are used responsibly. The regulation focuses on maintaining a balance between security, transparency and technological development.
Who is affected by the regulations?
The regulation primarily affects companies that develop and distribute AI systems. But since companies that integrate AI into their work processes must also comply with the EU AI Act, the regulation applies to almost every company in the European Union.
Providers of 'general-purpose AI' models, such as OpenAI's ChatGPT or Google's Gemini, are subject to additional requirements. They are obliged to disclose summaries of the training data of their AI systems and to comply with extensive transparency requirements.
What is prohibited from when?
The following transitional periods apply to the different risk categories in the EU:
| Milestone | Date |
| --- | --- |
| EU AI Act officially enters into force with a 2-year transition period | 01.08.2024 |
| Prohibition of AI practices classified as unacceptable risk (see above): manipulative techniques, exploitation of vulnerable people, social scoring, real-time biometric remote identification, biometric categorization, emotion recognition in the workplace without consent | 02.02.2025 |
| Transparency requirements and governance rules for general-purpose AI systems apply | 02.08.2025 |
| Obligations for high-risk AI systems take effect | 02.08.2026 |
| General 2-year transition period ends | 02.08.2026 |
| High-risk systems with a safety component (Article 6), including autonomous driving systems, AI-controlled elevators and biometric identification systems, are also covered | 02.08.2027 |
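The timeline above is effectively a sorted list of cut-off dates. A minimal sketch, using the application dates from the final regulation (prohibitions from 2 February 2025, general-purpose AI rules from 2 August 2025, full applicability from 2 August 2026, extended Article 6 deadline 2 August 2027), shows how to determine which obligations already apply on a given day:

```python
from datetime import date

# Milestone dates as set out in the final regulation.
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk practices apply"),
    (date(2025, 8, 2), "General-purpose AI transparency and governance rules apply"),
    (date(2026, 8, 2), "High-risk obligations apply; general transition period ends"),
    (date(2027, 8, 2), "Article 6 safety-component high-risk systems covered"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones that have already taken effect on a given date."""
    return [label for d, label in MILESTONES if d <= today]
```

For example, in March 2025 only the first two milestones have taken effect; from August 2027 all five apply.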
Which AI systems are excluded from the rules?
Not all AI systems fall under the strict requirements of the EU regulation. The AI Act defines exceptions for certain applications. These exceptions must be developed and implemented under clearly defined conditions and in the public interest. They include, for example:
AI systems for scientific research: Such artificial intelligence systems are used to generate new innovations and promote technological advances. An immediate, commercial interest must not be in the foreground.
Prototypes in regulated test environments ('regulatory sandboxes'): These specially set up test areas enable companies to develop, test and validate new AI technologies under controlled conditions. The developers retain control of the systems at all times. This reduces risks while providing room for innovation.
The EU is actively promoting these so-called regulatory sandboxes to support the development of AI technologies. Each EU Member State is legally obliged to set up at least one national AI sandbox by 2 August 2026 at the latest. The European Commission supports Member States through technical advice, tools and initiatives such as the EUSAiR project.
To drive AI innovation, the European Commission also announced in 2025 the mobilisation of €200 billion in AI investments. Of this, €150 billion is to come from around 70 private companies (EU AI Champions Initiative) and €50 billion from the EU.
Example of a Regulatory Sandbox
A pilot project in Spain brings together authorities and companies to develop best-practice guidelines for implementing the AI Regulation. The results will be made available to all EU member states to facilitate the introduction of AI regulation.
What penalties apply in the event of non-compliance?
Like the GDPR, the EU AI Act provides for severe penalties for breaches of the regulation. Above all, these are intended to act as a deterrent.
Fines of up to €35 million, or 7% of a company's worldwide annual turnover, are possible, whichever is higher.
In the case of repeated infringements, operators and developers of AI systems risk complete exclusion from the EU market until the identified shortcomings are remedied. This clear threat of penalties makes it plain that compliance with the rules is a top priority for the EU Commission.
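The "whichever is higher" rule can be computed directly. A minimal sketch, using the ceilings for the most serious (prohibited-practice) violations under the final regulation, €35 million or 7% of worldwide annual turnover:

```python
def max_fine_eur(annual_turnover_eur: int) -> int:
    """Maximum fine: the higher of EUR 35 million or 7% of worldwide
    annual turnover (integer euros, fraction rounded down)."""
    return max(35_000_000, annual_turnover_eur * 7 // 100)
```

For a company with €1 billion in turnover the ceiling is €70 million; for turnover up to €500 million the flat €35 million ceiling dominates.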
How can companies ensure that their systems are compliant?
Companies should implement the following three measures by the deadline of 2 August 2026:
Review your AI tools: Use a compliance checker to determine which risk class your system is assigned to. The Compliance Checker on artificialintelligenceact.eu is an interactive tool that helps determine whether an AI system is covered by the law, how it is risk-classified and which specific requirements need to be met.
Create transparency: If you work with high-risk systems, disclose all training data and usage instructions.
Apply for certifications: Collaborate specifically with notified bodies for conformity assessment when using high-risk AI systems. Notified bodies include private companies and governmental bodies that have been notified and audited, as well as national market surveillance authorities.