The EU AI Act came into force on 1 August 2024, and most of its provisions will apply after a two-year implementation period (i.e. from August 2026), during which time various supplementary legislation, standards and guidance will be published to assist organisations' compliance efforts. One important exception is the ban on prohibited AI systems set out in Article 5, which comes into force on 2 February 2025.

Article 5 is operator-agnostic, so (unlike other requirements under the AI Act) it does not change depending on the operator's role or identity. It therefore applies, among other things, to the placing on the market, putting into service or use of: 

  • AI systems for social scoring by both public and private actors, and not only by or on behalf of public authorities as initially envisaged. 
  • AI systems that infer emotions in the areas of the workplace and education institutions. 
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. 
  • AI systems linked to assessing or predicting the risk of a natural person committing a criminal offence, based solely on profiling or assessing personality traits and characteristics.
  • Biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. 

Other prohibited practices relate to the deployment of subliminal, manipulative or deceptive techniques; the exploitation of vulnerabilities due to age, disability or a specific social or economic situation; and the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. 

Prohibited practices are considered to pose intolerable risks to foundational values due to their potential negative impacts. Some of them have narrow exceptions where the use of such AI systems is considered critical to protect a significant public interest that outweighs the potential risks involved (for example, the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is permitted in certain circumstances). 

A breach of Article 5 may attract a fine of up to the higher of EUR 35,000,000 and 7% of total worldwide annual turnover. 

Platform service providers that develop and offer general-purpose AI technologies with a wide variety of possible applications (e.g. Google Cloud AutoML, Microsoft Azure Machine Learning, TensorFlow and Amazon SageMaker) face a particular compliance challenge. While most customer use is in practice unlikely to fall within the remit of the prohibited AI practices regime, there is a risk of non-compliance by a minority of customers. It can be difficult for providers to calibrate general-purpose technologies to restrict prohibited uses without also limiting the legitimate scenarios for which those technologies are intended. Many providers are sensibly developing codes of conduct that help facilitate customer compliance and demonstrate a responsible approach to regulators, and are updating their customer contracts to make clear that customers cannot engage in any prohibited AI practices. 

In November 2024, the AI Office invited businesses that provide or deploy AI systems, as well as a range of other stakeholders, to answer questions relating to the prohibited practices (including where further clarification and examples are needed as to whether an AI system falls within the scope of a prohibition). From the responses, the AI Office will develop guidelines intended to help providers and deployers achieve compliance. The Guidelines are due to be adopted in "early 2025". We hope that they will be issued before 2 February 2025, so watch this space for an update.


EU AI Act: are you ready for 2 February 2025 – 
the ban on prohibited AI systems comes into force!
