Artificial Intelligence (AI) is rapidly becoming one of the most exciting and transformative technologies of the digital age, expanding quickly in both business and personal use. While it offers significant opportunities for startups, AI companies, businesses integrating AI into their operations, and individual users, it also brings serious ethical and legal responsibilities.
As of February 2, 2025, the use of certain AI systems is prohibited. Let this serve as an early warning: if your AI system resembles or matches any of the categories listed below, we strongly recommend seeking professional guidance to ensure it is removed from this risk classification. Otherwise, your application may be banned throughout the European Union, affecting EU citizens and EU-based subsidiaries alike.
The European Union’s Artificial Intelligence Act (AI Act) prohibits specific AI applications to ensure responsible use of the technology and prevent potential harm. These bans aim to define ethical boundaries that must be respected by companies and individuals using AI, thereby fostering a safe and fair AI ecosystem. The restrictions outlined in the regulation are designed to ensure that AI innovation goes hand-in-hand with the protection of human rights and public trust.
Let’s take a closer look at which areas of AI are banned, the reasoning behind these prohibitions, and how ethical use of AI can make a real difference in the business world.
1. Subliminal Manipulation Techniques
AI provides revolutionary tools for enhancing marketing strategies and personalizing user experiences through behavioral analysis. However, these capabilities can blur ethical boundaries. One of the AI Act’s most striking bans addresses the use of subliminal manipulation techniques to influence individuals’ decisions without their awareness.
Imagine an e-commerce platform using an AI system that analyzes a customer’s shopping behavior and targets their vulnerabilities. The system delivers subliminal messages that nudge the customer into buying items they wouldn’t otherwise choose. While the consumer may believe they made a conscious choice, they are actually being manipulated by AI.
The AI Act explicitly prohibits such systems to protect individual autonomy and the right to make informed decisions. Companies using manipulation for short-term gains risk serious long-term damage to customer trust and their own reputations.
2. AI Systems Targeting Vulnerable Individuals (Abuse of Technological Power)
Thanks to AI’s powerful data processing capabilities, identifying and targeting individuals’ weaknesses has become easier. However, using this power to exploit vulnerable individuals is categorically unacceptable. The AI Act bans systems that manipulate groups such as children, the elderly, or those who are socioeconomically disadvantaged.
For example, consider an AI-powered financial advisory tool that targets elderly individuals with limited digital literacy. The system could steer them—without their full understanding—into high-risk investments, causing them financial harm. Similarly, a game that uses AI to analyze children’s behavior may push them toward unnecessary in-game purchases. These practices are both ethically and legally problematic.
Banning such applications ensures AI is used fairly and equitably. Companies and startups should see technology not just as a profit-generating tool but also as a means of creating societal value. Embracing ethical innovation will strengthen business reputation and increase customer loyalty over time.
3. Social Scoring Systems
Made famous by China’s social credit system, social scoring is one of the most dangerous potential uses of AI. Systems that score individuals based on social behavior, personal traits, or predicted tendencies—and use those scores to confer advantages or disadvantages—are expressly banned under the AI Act.
Imagine a bank assigning credit scores based on users’ social media activity and shopping behavior. The system could evaluate a person positively or negatively based on their social circles or online presence. This is a clear violation of personal rights and introduces discrimination. Likewise, an employer using AI to evaluate job applicants based on their social media activity is engaging in prohibited social scoring.
The ban on social scoring ensures that individuals are not unfairly judged based on historical data or online interactions. Companies must evaluate people based on fair and objective criteria—not social behavior or digital footprints.
4. Crime Prediction Based on Profiling
AI has enormous potential in legal fields, particularly for crime prediction. However, attempting to predict criminal behavior based solely on individuals’ past actions or personality traits can create bias and violate human rights. The AI Act prohibits these types of profiling-based crime prediction systems to preserve individuals’ right to a fair trial.
Such systems may use historical crime data to predict whether someone is likely to commit a crime. But they often rely on biased datasets, unfairly targeting specific ethnic or social groups. This undermines justice and the impartial application of the law.
By banning these systems, the AI Act ensures that AI plays only a supportive role in legal processes. Innovation can help improve decision-making, but it must not be used in ways that violate fundamental principles of justice.
5. Creating Unauthorized Facial Recognition Databases: Safeguarding Privacy
Facial recognition is one of AI’s most controversial applications. Collecting data from the internet or CCTV footage to create facial recognition databases without consent constitutes a serious violation of privacy. The AI Act strictly prohibits the creation of such databases.
For example, a tech company scraping images from social media without permission and using them to train facial recognition models would face serious legal consequences. These practices jeopardize individuals’ freedom and right to privacy.
When deployed in public spaces, facial recognition creates a constant sense of surveillance, eroding social trust and personal freedom. The AI Act aims to protect the public from this kind of intrusion.
6. Emotion Recognition in Sensitive Environments: The Boundaries of Workplace Surveillance
AI’s emotion recognition capabilities are increasingly used in applications ranging from personalized customer experiences to monitoring employee emotions in the workplace. However, using this technology in sensitive environments like offices and educational institutions can create pressure and stress. The AI Act bans emotion recognition in such contexts, except for medical or security purposes.
Continuous monitoring of emotional states in the workplace can negatively impact employee performance and well-being. Similarly, using AI to analyze students’ emotions in educational settings can result in stress and psychological harm.
This ban protects individuals’ emotional privacy. Technology should be used ethically—to support well-being, not to police or manipulate emotions.
7. Discriminatory Biometric Categorization
Biometric data is among the most sensitive and valuable data types processed by AI. However, misuse of this data can have severe consequences. The AI Act prohibits AI systems that use biometric data to infer sensitive characteristics such as race, religion, political views, or sexual orientation—and to categorize individuals accordingly.
For example, if a company uses biometric data in recruitment to infer candidates’ political beliefs or religious affiliations, this would be a serious ethical and legal violation. Such practices can restrict access to employment and increase social discrimination.
Given the sensitivity of biometric data, it must only be used for legitimate and ethical purposes. Companies must adopt a rigorous approach to data protection and avoid discriminatory algorithms.
8. Real-Time Biometric Identification in Public Spaces
One of the most hotly debated topics in recent years is the use of real-time biometric identification (RBI) systems in public spaces. Facial recognition, in particular, creates the potential for a surveillance society. Under the AI Act, these systems may only be used under exceptional circumstances—such as to prevent serious threats to public security or locate missing persons.
Such use requires prior legal approval and must be limited in both time and geographic scope. Even in emergencies, temporary use still requires proper authorization.
The restrictions on RBI systems aim to protect individual privacy and freedom of movement. Companies and public institutions must prioritize transparency and legal compliance when deploying such technologies.
9. Conclusion
The systems described above can be considered prohibited either individually or based on the capabilities of a broader AI model. Whether you train an open-source model or use a third-party model through an API or credits system, you are responsible for the outputs it produces.
Therefore, it is critical to limit your AI’s capabilities within the boundaries defined above, document these limitations, and regularly audit them. Technology doesn’t have to become a weapon—used ethically, it can be a force for progress.
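As a practical illustration of "limit, document, and audit," the sketch below shows one way a deployer might screen incoming requests against the prohibited-use categories discussed in this article and keep an audit trail of every decision. The category names, function, and data structure are hypothetical examples for illustration only — they are not terminology from the AI Act itself, and a real compliance program would require legal review, not just a code filter.

```python
from dataclasses import dataclass

# Hypothetical category labels mirroring the banned practices described above.
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation",
    "exploiting_vulnerable_groups",
    "social_scoring",
    "profiling_based_crime_prediction",
    "untargeted_facial_recognition_scraping",
    "workplace_or_school_emotion_recognition",
    "biometric_categorisation_of_sensitive_traits",
    "realtime_remote_biometric_identification",
}

@dataclass
class AuditRecord:
    """One auditable entry per request: what was asked, and whether it was allowed."""
    request_id: str
    category: str
    allowed: bool

def check_request(request_id: str, category: str, audit_log: list) -> bool:
    """Block requests in a prohibited category and record the decision for audits."""
    allowed = category not in PROHIBITED_CATEGORIES
    audit_log.append(AuditRecord(request_id, category, allowed))
    return allowed

audit_log: list = []
check_request("req-1", "social_scoring", audit_log)          # blocked
check_request("req-2", "product_recommendation", audit_log)  # allowed
```

Keeping the log as structured records (rather than free-text log lines) makes the regular audits mentioned above straightforward: blocked requests can be counted, reviewed, and exported per category.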