Foundation Models, Generative AI and the EU AI Act

Supplementary Update  |  July 2023

Annex III (high risk systems referred to in Art 6(2)) of the EU AI Act
Text in italics reflects European Parliament drafting.
The AI systems specifically referred to under points 1 to 8a, 8aa and 8ab stand for critical use cases and are each considered to be high risk AI systems pursuant to Article 6(2), provided that they fulfil the criteria set out in that Article.

1. Biometric and biometrics-based systems

(a) AI systems intended to be used for biometric identification of natural persons, with the exception of those mentioned in Article 5;

(aa) AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5;

Point 1 shall not include AI systems intended to be used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be.

Paragraph 1 – point 2 – point a

(a) AI systems intended to be used as safety components in the management and operation of road, rail and air traffic unless they are regulated in harmonisation or sectoral law.

Paragraph 1 – point 2 – point aa

(aa) AI systems intended to be used as safety components in the management and operation of the supply of water, gas, heating, electricity and critical digital infrastructure;

Paragraph 1 – point 3 – point a

(a) AI systems intended to be used for the purpose of determining access or materially influencing decisions on admission or assigning natural persons to educational and vocational training institutions;

Paragraph 1 – point 3 – point b

(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to those institutions;

Paragraph 1 – point 3 – point b a (new)

(ba) AI systems intended to be used for the purpose of assessing the appropriate level of education for an individual and materially influencing the level of education and vocational training that individual will receive or will be able to access;

Paragraph 1 – point 3 – point b b (new)

(bb) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of/within education and vocational training institutions;

Paragraph 1 – point 4 – point a

(a) AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests;

Paragraph 1 – point 4 – point b

(b) AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships.

Paragraph 1 – point 5 – point a

(a) AI systems intended to be used by or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, including healthcare services and essential services, including but not limited to housing, electricity, heating/cooling and internet, as well as to grant, reduce, revoke, increase or reclaim such benefits and services;

Paragraph 1 – point 5 – point b

(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.

Paragraph 1 – point 5 – point b a (new)

(ba) AI systems intended to be used for making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance;

Paragraph 1 – point 5 – point c

(c) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by police and law enforcement, firefighters and medical aid, as well as of emergency healthcare patient triage systems;

Paragraph 1 – point 6 – point a

Deleted by European Parliament.

Paragraph 1 – point 6 – point b

(b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities as polygraphs and similar tools insofar as their use is permitted under relevant Union and national law;

Paragraph 1 – point 6 – point c

Deleted by European Parliament.

Paragraph 1 – point 6 – point d

(d) AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences or, in the case of Union agencies, offices or bodies, as referred to in Article 3(5) of Regulation (EU) 2018/1725;

Paragraph 1 – point 6 – point e

Deleted by European Parliament.

Paragraph 1 – point 6 – point f

(f) AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences or, in the case of Union agencies, offices or bodies, as referred to in Article 3(5) of Regulation (EU) 2018/1725;

Paragraph 1 – point 6 – point g

(g) AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.

Paragraph 1 – point 7 – point a

(a) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies as polygraphs and similar tools insofar as their use is permitted under relevant Union or national law.

Paragraph 1 – point 7 – point b

(b) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;

Paragraph 1 – point 7 – point c

(c) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies for the verification of the authenticity of travel documents and supporting documentation of natural persons and to detect non-authentic documents by checking their security features.

Paragraph 1 – point 7 – point d

(d) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies to assist competent public authorities for the examination and assessment of the veracity of evidence in relation to applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status;

Paragraph 1 – point 7 – point d a (new)

(da) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities, for the purpose of detecting, recognising or identifying natural persons;

Paragraph 1 – point 7 – point d b (new)

(db) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies in migration, asylum and border control management for the forecasting or prediction of trends related to migration movement and border crossing;

Paragraph 1 – point 8 – point a

(a) AI systems intended to be used by a judicial authority or administrative body or on their behalf to assist a judicial authority or administrative body in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution.

Paragraph 1 – point 8 – point a a (new)

(aa) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view.

Paragraph 1 – point 8 – point a b (new)

(ab) AI systems intended to be used by social media platforms that have been designated as very large online platforms within the meaning of Article 33 of Regulation (EU) 2022/2065, in their recommender systems to recommend to the recipient of the service user-generated content available on the platform.