On 4 February 2025, the European Commission released its draft guidelines on prohibited AI practices (prohibited practices) under Article 5 of the EU AI Act (Draft Guidelines). The prohibitions themselves became applicable on 2 February 2025. See our recent article here for an overview of the prohibited practices.
The Draft Guidelines, which have been approved by the Commission but are yet to be formally adopted, will be welcome to many businesses currently assessing areas of risk ahead of 2 August 2025 (when fines for non-compliance with the provisions on prohibited practices become enforceable). Intended to assist with the effective application of the AI Act, the Draft Guidelines provide additional clarification on the Commission's interpretation of the prohibitions, including a number of practical examples. These illustrations will be particularly helpful when reviewing the AI practices likely to pose the highest risk to businesses (e.g. those used in the workplace), and they also shed light on how AI use in commercial practices such as advertising could in some cases fall within the scope of a prohibited practice (e.g. through manipulation and/or exploitation of vulnerable individuals). Although the Draft Guidelines are not binding, they seek to ensure the consistent and effective application of the AI Act across the EU and to help businesses comply with its requirements.
Prohibited AI practices in the workplace
Of the eight prohibitions under Article 5 of the AI Act, the most relevant for many businesses is the ban on emotion recognition systems in the workplace (Article 5(1)(f)). Importantly, only AI systems identifying or inferring emotions or intentions on the basis of biometric data constitute emotion recognition systems (based on the definition in Article 3(39) of the Act). In the Draft Guidelines, the Commission gives a number of examples of use cases that it considers to involve emotion inference in the workplace and which are therefore prohibited, including:
- AI systems inferring emotions from keystrokes (the way a person types), facial expressions, body postures or movements;
- Using webcams and voice recognition systems to track employees' emotions, such as anger (e.g. to monitor how call centre employees interact with customers);
- AI systems which monitor emotional tone in hybrid work settings by inferring emotions from voice and imagery, for instance as a conflict-prevention technique;
- Use of cameras in a supermarket to track the emotions of staff on a shop floor.
However, the Draft Guidelines also clarify that an AI system which merely identifies that a person is smiling (e.g. a TV broadcaster using a device that tracks how many times its news presenters smile at the camera) does not perform emotion recognition, and such use cases therefore fall outside the scope of the prohibition (by contrast, using this information to conclude that someone is happy would be considered emotion recognition). Similarly, an AI system inferring emotions from written text (e.g. content/sentiment analysis) to define the style or tone of a certain article is not based on biometric data and therefore does not fall within the scope of the prohibition.
In addition, the Draft Guidelines note that using an AI system to identify whether someone is sick, or to infer a professional pilot's or driver's fatigue in order to alert them and avoid accidents, does not constitute emotion recognition, since emotion recognition does not cover physical states such as pain or fatigue.
There is also an exception to this prohibition under the AI Act where the system is deployed solely for medical and/or safety reasons. In the Draft Guidelines, the Commission has clarified that, given the AI Act's objective of ensuring a high level of protection for fundamental rights, this exception should be interpreted narrowly. Notably, the exception does not cover the use of emotion recognition systems to detect general aspects of wellbeing. An AI system intended to detect burnout or depression in the workplace or in education institutions would therefore not be covered by the exception and would remain prohibited.
While the majority of AI systems used in the workplace are unlikely to be prohibited under the AI Act, businesses using AI systems in an HR or recruitment context should still ensure their usage does not fall under the Article 5 prohibition, by assessing current use cases concerning employment or recruitment and mapping risk areas in light of the Draft Guidelines. In assessing compliance, it is also important to remember that a broad interpretation of 'workplace' is being adopted (it is not limited to a physical work-related location), and that the prohibited AI practices in this category extend to candidates in a recruitment cycle or probationary period, not just to permanent employees of a business.
Can AI-driven advertising ever constitute a prohibited practice?
Recital 29 of the EU AI Act makes clear that AI-based advertising which complies with applicable law should not, in itself, be regarded as a harmful AI-enabled practice. However, a reading of the Draft Guidelines highlights that there may be occasions where advertising-related use cases could amount to prohibited practices under Article 5.
Article 5 prohibits both the use of AI systems involving subliminal, manipulative or deceptive techniques (Article 5(1)(a)) and AI systems which exploit the vulnerabilities of individuals due to age, disability or their social or economic situation (Article 5(1)(b)), where the AI system has the objective or effect of materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause significant harm. While these prohibitions are unlikely to catch most standard commercial advertising, the Draft Guidelines provide a number of example use cases relevant to advertising which would be prohibited under Article 5:
- For instance, if an AI system uses rapid image flashes (which are technically visible but flashed too quickly for the conscious mind to register) to influence purchasing decisions, this may fall under the prohibition in Article 5(1)(a). This would also be the case for auditory subliminal messages (sounds or verbal messages at low volumes or masked by other sounds, influencing the listener without conscious awareness) or embedded images (which are hidden within other visual content but may still be processed by the brain and influence behaviour).
- AI-powered media which deliberately targets vulnerable individuals in order to exploit their vulnerability, influencing them to make decisions they would not otherwise have taken and consequently causing them harm, may fall under the prohibition in Article 5(1)(b). For instance, this could include predictive AI algorithms used to target people who live in low-income postcodes and are in a difficult financial situation with ads for predatory financial products, exploiting their susceptibility to such ads and causing them significant financial harm.
- Similarly, AI systems that exploit the age-related cognitive vulnerabilities of older people, e.g. by targeting them with expensive medical treatments, unnecessary insurance policies or deceptive investment schemes, which may then lead to significant loss of savings, increased debt and emotional distress, would also fall under the Article 5(1)(b) prohibition.
In reality, these prohibitions will complement existing advertising and consumer protection legislation. Businesses that deploy AI in their advertising should therefore not be too concerned by the Draft Guidelines, so long as their advertising does not rely on subliminal, deceptive or manipulative techniques and does not attempt to subvert individual autonomy or exploit vulnerabilities in harmful ways.
According to the Commission, advertising that uses AI to personalise content based on transparent algorithms and user preferences (without employing manipulative or exploitative techniques) should not generally fall within the prohibitions.
If you would like any guidance in relation to the EU AI Act and/or the prohibited practices, your compliance obligations or possible areas of risk, please do get in touch with a member of the LS team.