Complying with the EU AI Act’s Prohibited Practices Requirements

Article 5 of the EU AI Act prohibits certain uses of AI technology that pose an unacceptable level of risk. While we have certain technical safeguards in place, as a provider of general purpose AI technology, OpenAI also expects our customers (including individual users, developers, and enterprise customers) to comply with applicable legal requirements, including avoiding any activities that are illegal under the EU AI Act if they are located or established in the EU. This article is intended to provide clarity on what these prohibited practices are; it is not intended to serve as legal advice about how you should comply with your own obligations under the EU AI Act.

The following uses of AI are prohibited under the EU AI Act, and – as we explain further in our Usage Policies and other terms – you should not use any of OpenAI’s services to engage in them:

  1. Engaging in subliminal, purposefully manipulative, or deceptive techniques that are intended to materially distort the behavior of an individual or group of people – meaning that their ability to make an informed decision is impaired – in a way that is likely to cause harm to that individual or others.

  2. Exploiting the vulnerabilities of an individual or group of people based on their age, disability, social situation, or economic situation in a way that materially distorts their behavior and is likely to cause significant harm to that individual or others.

  3. Assigning a social score (based on classification of individuals or groups over a period of time based on their behavior, their personal characteristics, or inferred/predicted characteristics) that either:

    1. Leads to unfavorable treatment of individuals or groups in unrelated social contexts, or

    2. Leads to unfavorable treatment of individuals or groups that is unjustified or disproportionate to their behavior.

  4. Conducting risk assessments of individuals based solely on profiling or assessing their personality traits to predict whether they may commit a crime.

  5. Creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

  6. Inferring emotions in the workplace or educational settings (unless a medical or safety exception applies).

  7. Categorizing individuals based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (unless based on lawfully acquired data sets in a context such as authorized law enforcement).

  8. Conducting “real time,” remote biometric identification in publicly accessible spaces for law enforcement purposes, unless it is strictly necessary for:

    1. A targeted search for missing persons or specific victims of abduction, human trafficking, or sexual exploitation;

    2. The prevention of a specific, substantial, and imminent threat to the life or physical safety of an individual or of a terrorist attack;

    3. The identification of suspects of certain criminal offenses.

For more information about how our services may be used and what other activities are prohibited, please consult OpenAI’s Usage Policies and other terms. The European Commission has also released guidance on the prohibited practices that may help you assess your own legal obligations and avoid these practices in your systems or products.
