The Artificial Intelligence Act (AI Act), the European Union's (EU) first comprehensive regulation of artificial intelligence, entered into force on August 1, 2024, and its first five articles have applied since February 2, 2025. These articles cover the subject matter and objectives, scope, definitions, AI literacy, and prohibited AI practices. Article 4 (AI Literacy) and Article 5 (Prohibited AI Practices) are the provisions that entities subject to the regulation should pay the closest attention to. On February 4, 2025, the European Commission also published a 135-page guideline on prohibited AI practices.[1]
This guideline offers a framework for interpreting Article 5 and underscores that each case must be evaluated individually, taking its specific circumstances into account.[2] On February 6, 2025, the Commission also released a guideline on the definition of AI systems.[3] That document explains how the definition of “AI system” in Article 3 of the AI Act should be interpreted.
AI Literacy
One of the most notable features of the regulation is Article 4, which makes AI literacy mandatory. The article requires providers[4] and deployers[5] to ensure a sufficient level of AI literacy among their staff and any other persons who operate or use AI systems on their behalf. This means that not only software engineers and technical teams, but anyone who interacts with AI, must be trained accordingly. Companies must ensure that employees who use or develop AI technologies understand the potential impacts and risks of these systems, and appropriate training programs should be developed to help them grasp how AI works, where it is used, and the ethical and legal dimensions of these technologies. Not all employees, however, are expected to receive the same level of training: AI literacy training should be tailored to each employee’s role and the organizational context. Engineers and developers, for example, should be trained on the technical risks and ethical dimensions of AI systems.
Organizational leaders, on the other hand, should understand how to interpret outputs generated by AI systems and how to integrate those outputs into business processes.
Although most of the regulation’s obligations are not yet applicable, meaning a full compliance program cannot yet be implemented, AI literacy remains crucial for companies regardless of regulatory obligations. In this context, it is advisable to draw up a compliance roadmap based on the AI Act’s requirements and the EU AI Office’s guidance. To enhance AI literacy, companies can identify the areas where AI is used within their organization and develop training programs tailored to those areas. When designing training, companies should assess employees’ current knowledge levels, analyze AI-related risks, and take compliance requirements into account. Reviewing the sample documents on AI literacy published by the AI Office is also worthwhile in this regard.[6]
Definition of AI Systems
On February 6, 2025, the European Commission released a guideline on the definition of AI systems.[7] This document clarifies how the definition of “AI system” under Article 3 of the AI Act should be interpreted. General-purpose AI models (e.g., large language models) are excluded from the scope of the current guideline. The essential components that an AI system must have under Article 3 of the AI Act are as follows:
- Machine-Based System: An AI system operates through both software and hardware components. The term “machine-based” reflects that the entire life cycle of advanced AI systems relies on machines, which may comprise multiple hardware and software elements.
- Designed to Operate with Varying Levels of Autonomy: The system must be capable of operating with varying degrees of autonomy from human intervention. Autonomy here implies that the system is designed to function with a certain level of independence from human input. Therefore, systems designed solely for manual operation are excluded from the definition.
- Exhibit Adaptiveness After Deployment: Some systems may have self-learning capabilities that allow their behavior to evolve. However, this is not mandatory. The ability to learn automatically, discover new patterns, or identify relationships in data beyond the initial training set is optional and not a determining factor in classifying a system as AI.
- AI System Objectives: AI systems should be designed to operate based on one or more objectives. These objectives can be explicit or implicit. Explicit goals are those directly coded into the system by the developer, such as optimizing a cost function or a cumulative reward. Implicit goals are not directly specified but can be inferred from the system’s behavior or assumptions. These goals may arise from training data or interactions with the environment.
- Capability to Generate Outputs: An AI system must be able to generate outputs, such as predictions, recommendations, or decisions, based on the inputs it receives. What distinguishes AI systems from simpler traditional software is their capability to infer. Techniques that enable such inference include machine learning approaches that learn from data how to achieve specific objectives, as well as logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representations. The guideline details these techniques and output types. Systems limited to improving mathematical optimization, basic data processing, classical heuristics, or simple prediction are excluded from the definition.
- Ability to Influence Physical or Virtual Environments: AI systems should not be passive; they must actively influence the environments in which they are deployed. The term “physical or virtual environments” refers to both tangible physical objects (e.g., robotic arms) and digital environments, including data flows and software ecosystems.
According to the guideline, these elements do not need to be present in all AI systems at all times. For instance, some systems may exhibit learning capabilities during development but not after deployment.
It is also important to note that this guideline is non-binding; the final authority to interpret the definition of AI systems lies with the Court of Justice of the European Union (CJEU). Whether a system qualifies as AI should be determined not by general rules alone but through technical and functional evaluation on a case-by-case basis. Compliance processes based solely on this guideline will therefore not constitute sufficient justification in court.
This guideline is of significant importance for companies and providers trying to determine whether their systems fall under the AI Act. If you are unsure whether a system you develop or use is subject to the AI Act, a thorough analysis is necessary to avoid potential penalties.
Prohibited AI Systems
Under Article 5 of the AI Act, several AI applications are prohibited across the EU. These include:
- Systems using subliminal manipulation or deceptive techniques: AI systems that influence users’ decisions unconsciously and significantly impair their ability to make informed choices are banned. These include systems that distort decision-making processes or cause harm through manipulation, such as personalized persuasive messaging based on personal data or exploiting individual vulnerabilities.
- AI applications exploiting the vulnerabilities of individuals (e.g., targeting children or people with disabilities): Systems that manipulate the decisions of vulnerable individuals due to age, disability, or socio-economic status are banned due to their potential to exacerbate social inequalities.
- Social scoring systems: AI systems that evaluate individuals based on their behavior, personality traits, or predicted tendencies, leading to discriminatory treatment, are prohibited due to their potential to significantly harm fundamental rights and freedoms.
- Systems assessing the risk of criminal behavior: Systems that use profiling and personal characteristics to predict criminal behavior are generally prohibited, with limited exceptions.
- Certain facial recognition systems: The development, provision, or use of AI systems that collect facial images indiscriminately from the internet or surveillance footage to build facial recognition databases is banned.
- Real-time remote biometric identification (RBI) systems: The use of real-time RBI systems in publicly accessible spaces for law enforcement purposes is prohibited, subject to narrowly defined exceptions, such as the targeted search for victims of certain serious crimes or the prevention of an imminent threat to life.
- Emotion recognition and biometric categorization systems: AI systems that infer individuals’ emotions in workplaces or educational institutions are banned unless used for medical or safety reasons, and biometric categorization systems that classify individuals to deduce their race, religious beliefs, political views, or sexual orientation are likewise prohibited. Emotion recognition tools used in AI-based hiring interviews are particularly at risk of violating this provision.
The prohibitions in Article 5 apply not only to AI systems developed for specific prohibited purposes but also to systems whose use may result in prohibited practices. For example, where a chatbot could manipulate users or lead them to harmful decisions through deceptive techniques, the provider must:
- Adopt ethical and secure design principles,
- Integrate technical and other safety measures,
- Clearly disclose prohibited uses in the terms of service and inform users accordingly,
- Implement transparency, user control, and human oversight mechanisms.
Additionally, providers must take proactive measures to prevent the use of their systems for prohibited purposes. For example, an emotion recognition system must not be used in workplaces or schools unless explicitly permitted for medical or safety reasons; providers must expressly prohibit such uses in their terms of service and inform users accordingly.
“Placing on the Market,” “Putting into Service,” and “Use” of AI Systems
Except for real-time RBI systems, placing prohibited AI systems on the market, putting them into service, and using them within the EU are all banned. For real-time RBI systems, only their use is prohibited.
- Placing on the market refers to the first making available of an AI system on the EU market, whether in return for payment or free of charge. This includes distribution through APIs, cloud platforms, direct downloads, physical copies, or embedded systems.
- Putting into service refers to supplying an AI system for first use, either directly to the deployer or for the provider’s own use, for its intended purpose. For instance, a public body that develops a fraud detection system and uses it internally has put that system into service.
- Use refers to any application or operation of an AI system after it has been placed on the market or put into service. Even integration into larger systems or use for unintended purposes falls under this definition.
Although providers must consider the intended purpose and reasonably foreseeable misuse of a system before placing it on the market, deployers remain responsible for compliance during use. For instance, if an employer uses an AI system to analyze employees’ emotions, the employer remains liable as deployer even if the provider explicitly prohibits such use in the contract.
Research, testing, or development activities before market placement or deployment are not subject to the regulation. However, real-world testing is not exempt. Similarly, even if providers offer their systems as open-source, they are still subject to the regulation if the system involves prohibited applications.
Under certain conditions, some high-risk AI systems may fall into the category of prohibited systems, in which case the obligations related to prohibited systems apply. For example, risk assessment tools used in credit scoring or health insurance are generally classified as high-risk systems, but if they result in unjustified social scoring they fall under the Article 5 prohibitions. Distinguishing between high-risk and prohibited systems is therefore essential. Note also that the AI Act applies horizontally across sectors and complements other EU laws, such as the GDPR and consumer protection, labor, and product safety legislation. An AI system that is not prohibited under the AI Act may still be unlawful under other EU rules.
Compliance and Penalties
The enforcement of the Article 5 prohibitions is handled by the national market surveillance authorities designated by EU Member States and, for Union institutions, bodies, and agencies, by the European Data Protection Supervisor (EDPS). These authorities have powers similar to those under other EU product safety regulations.
Authorities can conduct inspections either proactively or in response to complaints. EU Member States must designate national authorities responsible for enforcement by August 2, 2025. If a prohibited AI system affects more than one Member State, the relevant authority must inform the European Commission and other national authorities.
The AI Act adopts a tiered penalty system, with the highest penalties reserved for violations of Article 5. Providers or deployers of prohibited AI systems can face fines of up to EUR 35 million or 7% of their total worldwide annual turnover, whichever is higher; for a company with EUR 1 billion in annual turnover, for example, the ceiling would be EUR 70 million.
For violations by EU institutions, bodies, offices, and agencies, the EDPS may impose administrative fines of up to EUR 1.5 million.
Although the penalty provisions have not yet entered into application, non-compliance already carries risks. The European Commission and the AI Office have stated that they are monitoring systems that may violate the regulation; recently, the AI Office announced that it is closely watching DeepSeek.[8] It is therefore essential to prepare compliance processes accordingly. It should also be noted that the guideline on prohibited practices has been approved but not yet formally adopted by the Commission, and it is in any event non-binding.
Conclusion and Evaluation
The AI Act is considered a critical regulation for ensuring that AI technologies are developed and used in an ethical, trustworthy, and rights-respecting manner. The prohibitions and transparency obligations aim to minimize the potential negative impacts of AI systems on society. Companies and institutions should begin taking the necessary steps now to ensure compliance and avoid potential sanctions.
This article aimed to provide a general overview. However, since the Commission’s published guidelines are detailed and extensive, we plan to examine these documents in greater depth in separate articles in the near future.
[1] European Commission (4 February 2025) Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act) https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act.
[2] European Commission (4 February 2025) Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act) https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act.
[3] European Commission (6 February 2025) Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act) https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application.
[4] “Providers” are individuals, companies, or public institutions that develop AI systems, place them on the EU market, or put them into service under their own name. Even providers established outside the EU are subject to the regulation if they make their systems available on the EU market or if the systems are used within the EU.
[5] “Deployers” are individuals, institutions, or companies that use AI systems under their own authority. Under the regulation, if an AI system is used within the EU, the fact that the provider or deployer is located outside the EU does not exempt them from responsibility. For example, if an AI system developed by a company outside the EU is used by an organization within the EU, the users of that system are obliged to comply with the regulation. It is important to note that the regulation does not apply to individual deployers using AI systems exclusively in the course of a purely personal or non-professional activity.
[6] European Commission (4 February 2025) Living repository to foster learning and exchange on AI literacy. https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy
[7] European Commission (6 February 2025) Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act) https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application.
[8] MLex (29 January 2025) China’s DeepSeek faces EU AI Office regulatory scrutiny across range of risks. https://www.mlex.com/mlex/articles/2290332/china-s-deepseek-faces-eu-ai-office-regulatory-scrutiny-across-range-of-risks.