Our Legal Services on procuring, using and adopting AI may be accessed HERE >
The European Parliament (“EP”) adopted a heavily revised draft of the EU AI Act on 14 June 2023. That draft constituted the EP’s published position for the purposes of the trialogue negotiations with the Council and the Commission. Agreement on and adoption of the EU AI Act is expected by the end of 2023 at the latest.
This Q&A reflects the EP version of the EU AI Act.
Note: words in bold below (excluding titles) are for the most part defined or explained in the EU AI Act. For space reasons they are not defined here.
The EU AI Act proposes a risk-based approach, differentiating between AI systems or uses of AI systems that:
As a result of the EP changes, the scope of the EU AI Act is wider. Suppliers in third countries should take particular note of what is said below.
The EU AI Act now, subject to some exceptions, applies to:
For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, and that fall within the scope of the Union harmonisation legislation listed in Annex II, Section B (List of other Union harmonisation legislation), only Article 84 of the EU AI Act will apply (Art 2(2)).
The EU AI Act now incorporates substantially the OECD definition of an AI system as follows:
‘Artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments (Article 3(1) of the EU AI Act).
The original definition was complemented by text in Annex I (Artificial Intelligence Techniques and Approaches). However, the EP draft of the EU AI Act has deleted the Annex I text.
A new Article 4a (general principles applicable to all AI systems) requires operators to make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles: (a) human agency and oversight; (b) technical robustness and safety; (c) privacy and data governance; (d) transparency; (e) diversity, non-discrimination and fairness; and (f) social and environmental well-being. The Article explains what these terms mean.
Recital (9a) of the EU AI Act provides background to this drafting by reference to the Charter of Fundamental Rights of the EU.
Additionally, a new Article 4b (AI literacy) imposes obligations on providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.
The EU AI Act explains that AI literacy, as defined, allows providers, users and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and the possible harm it can cause (Recital 9c truncated).
The actors involved in the standardisation process must take into account the general principles set out in Article 4a of the EU AI Act, in accordance with Article 5 (Stakeholder participation in European standardisation), Article 6 (Access of SMEs to standards) and Article 7 (Participation of public authorities in European standardisation) of Regulation (EU) No 1025/2012 (Standardisation Regulation) (Article 40(1c) truncated).
In a heavily revised Art 5 (extracts only and paraphrased below), Art 5 now prohibits (subject to exceptions which are not reproduced here) the placing on the market, the putting into service or the use of an AI system:
This Article will not affect the prohibitions that apply where an artificial intelligence practice infringes another Union law, including Union law on data protection, non-discrimination, consumer protection or competition (Art 5(1)(a)).
An AI system will be considered high-risk where:
Such high-risk AI products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in-vitro diagnostic medical devices (Recital 30 truncated and paraphrased).
In addition to the high-risk AI systems referred to in Art 6(1), AI systems falling under one or more of the critical areas and use cases referred to in Annex III will be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it will be considered to be high-risk if it poses a significant risk of harm to the environment (Art 6(2)).
The Commission must, six months prior to the entry into force of the EU AI Act, after consulting the AI Office and relevant stakeholders, provide guidelines clearly specifying the circumstances where the output of AI systems referred to in Annex III would pose a significant risk of harm to the health, safety or fundamental rights of natural persons or cases in which it would not (Article 6(2)).
Where providers falling under one or more of the critical areas and use cases referred to in Annex III consider that their AI system does not pose a significant risk as described in Art 6(2), they must submit a reasoned notification to the national supervisory authority that they are not subject to the requirements of Title III Chapter 2 of the EU AI Act (Article 6(2a)).
Where the AI system is intended to be used in two or more Member States, that notification must be addressed to the AI Office. Without prejudice to Article 65 (Procedure for dealing with AI systems presenting a risk at national level), the national supervisory authority must review and reply to the notification, directly or via the AI Office, within three months if they deem the AI system to be misclassified (Art 6(2a)).
Providers that misclassify their AI system as not subject to the requirements of Title III Chapter 2 (Arts 8 to 15) and place it on the market before the deadline for objection by national supervisory authorities will be subject to fines under Article 71 (Article 6(2)(b)).
National supervisory authorities must submit a yearly report to the AI Office detailing the number of notifications received, the related high-risk areas at stake and the decisions taken concerning received notifications (Article 6(2)(c)).
Annex III has been extensively revised by the European Parliament and the revised text is accessible HERE >. Note the additions to Annex III now include:
AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view.
AI systems intended to be used by social media platforms that have been designated as very large online platforms within the meaning of Article 33 of Regulation EU 2022/2065 (DSA) in their recommender systems to recommend to the recipient of the service user-generated content available on the platform.
Title III Chapter 2 (Arts 8 to 15) (Chapter 2) of the AI Act contains complex, specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons.
More particularly, Art 8 (Compliance), Art 9 (Risk management system), Art 10 (Data and data governance), Art 11 (Technical documentation), Art 12 (Record-keeping), Art 13 (Transparency and provision of information to users), Art 14 (Human oversight), and Art 15 (Accuracy, robustness and cybersecurity) set out the legal requirements for high-risk AI systems.
These requirements, the Commission argues, are already state of the art for many diligent operators and are the result of two years of preparatory work, derived from the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (HLEG), piloted by more than 350 organisations.
Note Article 10 (Data and data governance) has been substantially revised by the European Parliament and is essential reading for providers or deployers of high risk AI systems. A summary of it may be accessed HERE >.
The substance of provider obligations is found in Art 16. Amongst the most important provider obligations are to:
Additionally, obligations are placed on providers of high-risk AI systems as regards: a quality management system (Art 17); drawing up technical documentation (Art 18); conformity assessment (Art ); keeping logs automatically generated for at least six months (Art 20); taking corrective actions (as set out) (Art 21); and, where the high-risk AI system presents a risk within the meaning of Art 65(1), immediately informing the national competent authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken (Art 22 Duty of information).
Providers and where applicable, deployers of high-risk AI systems must, upon a reasoned request by a national competent authority (NCA) or where applicable, by the AI Office or the Commission, provide them with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of Title III.
Upon a reasoned request by an NCA or, where applicable, by the Commission, providers and, where applicable, deployers must also give the requesting national competent authority or the Commission, as applicable, access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control.
Providers and deployers must establish and document a post-market monitoring system (Art 61(1)), which must be based on a template post-market monitoring plan contained in an implementing act to be adopted by the EU Commission (Art 61(3) extract).
The post-market monitoring system must amongst other things, allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2 (Art 61(2) extract).
Providers and, where deployers have identified a serious incident, deployers of high-risk AI systems placed on the Union market must report any serious incident of those AI systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the national supervisory authority of the Member States where that incident or breach occurred (Art 62(1)).
Such notification must be made without undue delay after the provider, or, where applicable, the deployer, has established a causal link between the AI system and the incident or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider or, where applicable, the deployer becomes aware of the serious incident (Art 62(1)).
Upon establishing a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, providers shall take appropriate corrective actions pursuant to Article 21 (Art 62 (1a)).
Upon receiving a notification related to a breach of obligations under Union law intended to protect fundamental rights, the national supervisory authority must inform the national public authorities or bodies referred to in Article 64(3). The Commission must develop dedicated guidance to facilitate compliance with the obligations set out in Art 62(1). That guidance shall be issued by the entry into force of the EU AI Act and shall be assessed regularly (Art 62(2)).
For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this EU AI Act, the notification of serious incidents constituting a breach of fundamental rights under Union law shall be transferred to the national supervisory authority (Art 62(3)).
Similar to the Market Surveillance Regulation (EU) 2019/1020, the EU AI Act imposes proportionate obligations on product manufacturers (Art 24), authorised representatives (Art 25), importers (Art 26) and distributors (Art 27), and further obligations on distributors, importers, deployers or any other third party (Arts 28, 28a and 28b).
Art 28(1) sets out when any distributor, importer, deployer or other third party will be considered a provider of a high-risk AI system and be subject to the obligations of the provider under Article 16 of the EU AI Act.
Additionally, Art 28(2a) requires the provider of a high-risk AI system and the third party that supplies tools, services, components or processes that are used or integrated in the high-risk AI system to specify, by written agreement, the information, capabilities, technical access and/or other assistance, based on the generally acknowledged state of the art, that the third party is required to provide in order to enable the provider of the high-risk AI system to comply fully with its obligations under the EU AI Act.
The Commission must develop and recommend non-binding model contractual terms between providers of high-risk AI systems and third parties in order to assist both parties in drafting and negotiating contracts with balanced contractual rights and obligations, consistent with each party’s level of control (Art 28(2a) extract).
Art 28a (Unfair contractual terms unilaterally imposed on an SME or startup) will apply to all new contracts entered into after the date of entry into force of the EU AI Act. Businesses will be required to review existing contractual obligations that are subject to the EU AI Act within three years of the date of entry into force of the AI Act (Art 28a(7)).
Art 28b imposes critically important obligations on providers of foundation models. These may be accessed HERE >
Deployers of high-risk AI systems must take appropriate technical and organisational measures to use such systems in accordance with the instructions of use accompanying the systems, pursuant to Art 29(2) and Art 29(5) (Art 29(1)).
To the extent deployers exercise control over the high-risk AI system, they must: (extracts only and paraphrased):
To the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose (Art 29(3)).
Deployers must monitor the operation of the high-risk AI system and, when relevant, inform providers in accordance with Article 61. When the AI system presents a risk within the meaning of Article 65(1) they must, without undue delay, inform the provider or distributor and the relevant national supervisory authorities and suspend the use of the system (Art 29(4)).
Deployers must also immediately inform first the provider, and then the importer or distributor and relevant national supervisory authorities when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. If the deployer is not able to reach the provider, Article 62 will apply mutatis mutandis (Art 29(4)).
For deployers that are credit institutions regulated by Directive 2013/36/EU, the monitoring obligation will be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive (Art 29(4)(1)).
Deployers of high-risk AI systems must keep the logs automatically generated by that high-risk AI system, to the extent that such logs are under their control and are required for ensuring and demonstrating compliance for ex-post audits of any reasonably foreseeable malfunction, incidents or misuses of the system, or for ensuring and monitoring the proper functioning of the system throughout its lifecycle. The logs must be kept for a period of at least six months. The retention period must be in accordance with industry standards and appropriate to the intended purpose of the high-risk AI system (Art 29(5)).
Deployers that are credit institutions regulated by Directive 2013/36/EU must maintain the logs as part of the documentation concerning internal governance arrangements, under Article 74 of that Directive (Art 29(5.1)).
Prior to putting into service or use a high-risk AI system at the workplace, deployers must consult workers' representatives with a view to reaching an agreement in accordance with Directive 2002/14/EC and inform the affected employees that they will be subject to the system (Art 29(5a)).
Deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies or undertakings referred to in Article 51(1a)(b) must comply with the registration obligations referred to in Article 51 (Art 29(5b)).
Where applicable, deployers of high-risk AI systems must use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, a summary of which shall be published, having regard to the specific use and the specific context in which the AI system is intended to operate (Art 29(6)).
Deployers of high-risk AI systems referred to in Annex III, which make decisions or assist in making decisions related to natural persons, must inform the natural persons that they are subject to the use of the high-risk AI system. This information must include the intended purpose and the type of decisions it makes. The deployer must also inform the natural person about its right to an explanation referred to in Article 68c (Art 29(6a)).
Deployers must cooperate with the relevant national competent authorities on any action those authorities take in relation to the high-risk AI system in order to implement the EU AI Act (Art 29(6b)).
Conformity assessment means the process of verifying whether the requirements set out in Chapter 2 of the EU AI Act (referred to above) relating to an AI system have been fulfilled (Art 3(20)).
The European Parliament’s draft of the EU AI Act has, for the purposes of the trialogue, deleted Article 19.
The European Parliament’s requirements set out in amended Articles 40 to 50 of the EU AI Act may be accessed HERE >
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative must register that system in the EU database referred to in Article 60, in accordance with Article 60(2) (Art 51(1)).
Before putting into service or using a high-risk AI system in accordance with Article 6(2), the following categories of deployers must register the use of that AI system in the EU database referred to in Article 60: (a) deployers who are public authorities or Union institutions, bodies, offices or agencies or deployers acting on their behalf; (b) deployers who are undertakings designated as a gatekeeper under Regulation (EU) 2022/1925 (DMA) (Art 51(1a)).
Deployers who do not fall under Art 51(1a) will be entitled to voluntarily register the use of a high-risk AI system referred to in Article 6(2) in the EU database referred to in Article 60 (Art 51(2)).
An updated registration entry must be completed immediately following each substantial modification (Art 51(3)).
The text below represents extracts and truncated text from Art 52. The language is imprecise but relatively clear. There are exceptions to Art 52 which are not reproduced here. The language below represents compromises by MEPs in order to reach an agreed position prior to the start of the trialogue.
Providers must ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person:
Users of an emotion recognition system or a biometric categorisation system (which is not prohibited under Article 5) must inform the natural persons exposed thereto of the operation of the system, and obtain their consent prior to the processing of their biometric and other personal data, in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680, as applicable.
Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), must disclose that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it.
Disclosure means labelling the content in a way that informs the recipient that the content is inauthentic and that is clearly visible to that recipient (Article 52(3)).
Where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic work, video game visuals or an analogous work or programme, the transparency obligations set out in Article 52(3) are limited to disclosing the existence of such generated or manipulated content in an appropriate, clear and visible manner that does not hamper the display of the work, and disclosing the applicable copyrights, where relevant.
The information referred to in Art 52(1) to Art 52(3) must be provided to the natural persons at the latest at the time of the first interaction or exposure. It must be accessible to vulnerable persons, such as persons with disabilities or children, and be complete, where relevant and appropriate, with intervention or flagging procedures for the exposed natural person (Art 52(3b)).
Member States must:
The specific interests and needs of SMEs, start-ups and users must be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their development stage, size, market size and market demand.
The Commission must regularly assess the certification and compliance costs for SMEs and start-ups, including through transparent consultations with SMEs, start-ups and users and shall work with Member States to lower such costs where possible. The Commission must report on these findings to the European Parliament and to the Council as part of the report on the evaluation and review of this Regulation provided for in Article 84(2) (Art 55(2)).
Art 56 establishes the ‘European Artificial Intelligence Office’ (the ‘AI Office’), an independent body of the Union with its seat in Brussels. It will have legal personality.
The tasks of the AI Office include:
The AI Office will be accountable to the European Parliament and to the Council in accordance with the EU AI Act.
The Commission must, in collaboration with the Member States, set up and maintain an EU database containing information referred to in Art 60(2) concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51 (Art 60(1)).
The data listed in Annex VIII (see below) must be entered into the EU database by the providers (Art 60(2)).
Where, having performed an evaluation under Article 65 (Procedure for dealing with AI systems presenting a risk at national level), the market surveillance authority of a Member State finds that although an AI system is in compliance with the EU AI Act, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights or to other aspects of public interest protection, it must require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe (Art 67(1)).
The provider or other relevant operators must ensure that corrective action is taken in respect of all the AI systems concerned that they have made available on the market throughout the Union within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1 (Art 67(1)).
The Member State must immediately inform the Commission and the other Member States. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken (Art 67(3)).
The Commission must without delay enter into consultation with the Member States and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures (Art 67(4)).
Where the market surveillance authority of a Member State makes one of the following findings, it must require the relevant provider to put an end to the non-compliance concerned:
Where the non-compliance referred to in Art 68(1) persists, the Member State concerned must take all appropriate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market (Art 68(2)).
Without prejudice to any other administrative or judicial remedy, every natural person or group of natural persons will have the right to lodge a complaint with a national supervisory authority, in particular in the Member State of his or her habitual residence, place of work or place of the alleged infringement, if they consider that an AI system relating to him or her infringes this Regulation (Art 68a(1)).
The national supervisory authority with which the complaint has been lodged must inform the complainant of the progress and the outcome of the complaint, including the possibility of a judicial remedy pursuant to Article 78 (Art 68a(2)).
Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them (Art 68b(1)).
Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy where the national supervisory authority which is competent pursuant to Article 59 does not handle a complaint or does not inform the data subject within three months on the progress or outcome of the complaint lodged pursuant to Article 68a (Art 68b(2)).
Proceedings against a national supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established (Art 68b(4)).
Where proceedings are brought against a decision of a national supervisory authority which was preceded by an opinion or a decision of the Commission in the Union safeguard procedure, the supervisory authority must forward that opinion or decision to the court (Art 68b(4)).
Any affected person subject to a decision taken by the deployer on the basis of the output of a high-risk AI system which produces legal effects or similarly significantly affects him or her, in a way that they consider adversely impacts their health, safety, fundamental rights, socio-economic well-being or any other rights deriving from the obligations laid down in the EU AI Act, will have the right to request from the deployer a clear and meaningful explanation, pursuant to Article 13(1) (Transparency and provision of information), of the role of the AI system in the decision-making procedure, the main parameters of the decision taken and the related input data (Art 68c(1)).
‘General purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed (Art 3(1d)).
The Commission, the AI Office and the Member States must encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application, to AI systems other than high-risk AI systems, of the requirements set out in Title III, Chapter 2, on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems. This includes codes of conduct drawn up in order to demonstrate how AI systems respect the principles set out in Article 4a and can thereby be considered trustworthy (Art 69(1)).
Codes of conduct intended to foster the voluntary compliance with the principles underpinning trustworthy AI systems, must, in particular:
Codes of Conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders, including scientific researchers, and their representative organisations, in particular trade unions, and consumer organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems. Providers adopting codes of conduct will designate at least one natural person responsible for internal monitoring (Art 69(3)).
The Commission and the AI Office must take into account the specific interests and needs of SMEs and start-ups when encouraging and facilitating the drawing up of codes of conduct (Art 69(4)).
© Paul Foley Law 2023. All rights Reserved. Learn more about our legal services in this area HERE >