
The EU Artificial Intelligence Act

By Paul Foley

Key points for AI system developers and providers

The Charter of Fundamental Rights of the European Union (the Charter) brings together the most important personal freedoms and rights enjoyed by citizens of the EU in one legally binding document. The Charter came into force in December 2009 and has the same legal value as the EU Treaties.

The relevance to artificial intelligence systems is that their specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of the fundamental rights enshrined in the Charter.

In February 2020, the European Commission (EUC) published the White Paper on “Artificial Intelligence – A European approach to excellence and trust” and launched an online public consultation at the same time in order to gather views and opinions. The EUC received over 1,200 responses on the issues raised in the consultation. Following on from that consultation, the Commission published the draft Artificial Intelligence Act in April 2021 (the “AI Act”).

The AI Act

The draft AI Act marks the first time the EU Commission has sought to regulate all aspects of artificial intelligence. It seeks to ensure a high level of protection for the fundamental rights enshrined in the Charter (referred to above).

The AI Act will not apply to AI systems developed or used exclusively for military purposes (for which in broad terms a number of regulatory regimes already exist).

The AI Act will, subject to some exceptions, apply to:

  1. providers placing AI systems on the market or putting them into service in the Union, irrespective of whether those providers are established within the Union or in a third country;
  2. users of AI systems located within the Union;
  3. providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union (Art 2(1)).

What the AI Act proposes

The AI Act follows a risk-based approach, differentiating between uses of AI systems that:

  1. create an unacceptable risk and are thus prohibited (Art 5);
  2. create a high risk (Art 6);
  3. interact with humans, are used to detect emotions or to determine association with (social) categories based on biometric data, or generate or manipulate content (deep fakes) (Art 52);
  4. create only low or minimal risk.

Fines (Art 71)

The AI Act proposes very substantial fines for different categories of breaches, rising to €30 million or, for companies, 6% of total worldwide annual turnover (whichever is higher) for the most serious infringements.

Existing AI systems (Art 83)

Art 83 sets out how the AI Act will apply to existing AI systems. Expect this to be heavily negotiated.

The AI Act will not apply to AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before the date of application of the AI Act (referred to in Art 85(2)), unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned (Art 83(1), first paragraph).

However, the AI Act requirements will be taken into account, where applicable, in the evaluation of each of the large-scale IT systems established by the legal acts listed in Annex IX (Art 83(1), second paragraph).

Additionally, the AI Act will apply to high-risk AI systems, other than those referred to in Art 83(1), that have been placed on the market or put into service before the date of application of the AI Act (referred to in Art 85(2)) only if, from that date, those systems are subject to significant changes in their design or intended purpose (Art 83(2)).

AI systems and changes to the AI Act

The definition of an AI system (Art 3(1)) aims to be as technology neutral and as future proof as possible. In order to provide the needed legal certainty, the definition of an AI system is complemented by Annex I (Artificial Intelligence Techniques and Approaches). The definition differs from that published by the OECD, and indeed the Portuguese Presidency has proposed an amended definition of an AI system.

The list of high-risk AI systems referred to in Annex III (eight categories) contains a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future.

The AI Act permits the EU Commission to adopt delegated acts:

  1. to update Annex I (Art 4);
  2. to update the list in Annex III (Art 7);
  3. to amend Annex IV (Art 11(3));
  4. to update Annexes VI and VII (Art 43(5));
  5. to amend Art 43(1) and (2), in order to subject AI systems in points 2 to 8 of Annex III to the conformity assessment procedure in Annex VII (or parts thereof) (Art 43(6)); and
  6. for the purpose of updating the content of the EU declaration of conformity set out in Annex V (Art 48(5)).

The Commission’s powers to adopt delegated acts are conferred by Art 73 of the AI Act.

In submissions on the AI Act, these powers have all been the subject of much comment: see below under Standards.

Unacceptable risk (prohibited practices) (Art 5)

Art 5, in summary, prohibits the placing on the market, the putting into service or the use of an AI system:

  1. that deploys subliminal techniques in order to materially distort a person’s behaviour;
  2. that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability;
  3. by public authorities for the evaluation or classification of the trustworthiness of natural persons, leading to certain types of detrimental or unfavourable treatment (this essentially prohibits general-purpose AI-based social scoring by public authorities) (Art 5(1)(a) to (c)).

Art 5(1)(d) prohibits the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and insofar as such use is strictly necessary for one of the following objectives: 

  1. the targeted search for specific potential victims of crime, including missing children;
  2. the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
  3. the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence as set out in Art 5(1)(d).

High-risk systems (Art 6)

An AI system will be considered high-risk where:

(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II; and

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product under the Union harmonisation legislation listed in Annex II (Art 6(1)).

In addition to the high-risk AI systems referred to in Art 6(1), AI systems referred to in Annex III will also be considered high-risk (Art 6(2)).

The legal requirements for high-risk AI systems (Chapter 2 of Title III)

Title III, Chapter 2 (Arts 8 to 15) (Chapter 2) of the AI Act contains complex specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons.

More particularly, Art 8 (Compliance), Art 9 (Risk management), Art 10 (Data and data governance), Art 11 (Technical documentation), Art 12 (Record-keeping), Art 13 (Transparency and provision of information to users), Art 14 (Human oversight) and Art 15 (Accuracy, robustness and cybersecurity) set out the legal requirements for high-risk AI systems.

These requirements, the Commission argues, are already state of the art for many diligent operators and are the result of two years of preparatory work, derived from the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (AI HLEG), piloted by more than 350 organisations.

The obligations of providers of high-risk AI systems (Art 16 and other Arts)

The substance of provider obligations is found in Art 16. Amongst the most important are the obligations to:

  • ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 (as above) (Art 16(a));
  • ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service (Art 16(e));
  • comply with the registration obligations referred to in Art 51 (Art 16(f));
  • take the necessary corrective actions, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 (Art 16(g));
  • inform the relevant national competent authorities of the Member States in which they made the AI system available or put it into service and, where applicable, the notified body of the non-compliance and of any corrective actions taken (Art 16(h));
  • affix the CE marking to their high-risk AI systems to indicate conformity with the AI Act in accordance with Art 49 (Art 16(i)); and
  • upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 referred to above (Art 16(j)).

Related to Chapter 2, providers are also subject to obligations regarding a quality management system (Art 17), drawing up technical documentation (Art 18), conformity assessment (Art 19) and automatically generated logs (Art 20).

Co-operation obligations of providers (Art 23)

Providers must, on request, demonstrate conformity of a high-risk system and must also give the national competent authority access to the logs automatically generated by the high-risk system, to the extent such logs are under their control by virtue of a contract or by law.

Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems (Art 61)

Providers must establish and document a post-market monitoring system (Art 61(1)), which must be based on a template post-market monitoring plan contained in an implementing act to be adopted by the EUC (Art 61(3)).

The post-market monitoring system must amongst other things, allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2 (Art 61(2)).

Obligations of product manufacturers and others (Art 24 to Art 28)

Similar to the Market Surveillance Regulation (EU) 2019/1020, the AI Act imposes proportionate obligations on product manufacturers (Art 24), authorised representatives (Art 25), importers (Art 26) and distributors (Art 27), and further obligations on distributors, importers, users or any other third party (Art 28). These Articles have significant implications for distributors and importers.

Currently the AI Act does not provide for individual enforcement rights. One of the amendments called for is the inclusion of an explicit right of redress for individuals.

Obligations of users of high-risk AI systems (Art 29)

In addition to Art 28, Art 29 places a small number of obligations on users (some argue the term ‘users’ should be replaced by ‘deployers’) of high-risk AI systems. Its drafting has been criticised for omitting many additional obligations that ought to be placed on users. We will have to see how this Article evolves in negotiations.

The most important Art 29 obligation is the requirement that users use the information provided under Art 13 (Transparency and provision of information to users) to comply with their obligation to carry out a data protection impact assessment under Art 35 of Regulation (EU) 2016/679 (GDPR) or Art 27 of Directive (EU) 2016/680, where applicable (Art 29(6)).

Conformity assessment - what is it and what is required by the AI Act (Arts 19, 39 to 51)

Conformity assessment means the process of verifying whether the requirements set out in Chapter 2 of the AI Act (referred to above) relating to an AI system have been fulfilled (Art 3(20)).

Standards and Common Specifications (Arts 40 and 41)

High-risk AI systems which are in conformity with harmonised standards which have been published in the Official Journal will be presumed to be in conformity with the Chapter 2 requirements, to the extent those standards cover those requirements (Art 40).

The Commission may also adopt common specifications in respect of the Chapter 2 requirements where, inter alia, relevant harmonised standards are insufficient (Art 41(1)). There is also a presumption of conformity with the requirements in Art 10(4) (Training, validation and testing data sets) for certain trained and tested high-risk systems referred to in Art 42 (Art 42(1)).

The level of delegation of regulatory power in the AI Act to European standardisation organisations (ESOs), the inability of consumer associations and other civil society organisations to influence the development of standards, and the lack of judicial means to control standards once they have been adopted are the main criticisms that have been made of this part of the AI Act by Ebers and others.

Instead, they recommend that the AI Act codify a set of legally binding requirements for high-risk AI systems (e.g. prohibited forms of algorithmic discrimination), which ESOs may then specify further through harmonised standards. Furthermore, they advocate that European policy-makers strengthen democratic oversight of the standardisation process (European Parliament briefing on the AI Act, January 2022).

Conformity Assessment (Art 43)

For high-risk AI systems in point 1 of Annex III (AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons), where the provider has applied harmonised standards referred to in Article 40 or, where applicable, common specifications referred to in Article 41, the provider must follow one of the following procedures:

  1. the conformity assessment procedure based on internal control referred to in Annex VI;
  2. the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII (Art 43(1)).

Where harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider must follow the conformity assessment procedure set out in Annex VII (Art 43(1)).

For high-risk AI systems to which legal acts listed in Annex II, section A (List of Union harmonisation legislation based on the New Legislative Framework) apply, the provider must follow the relevant conformity assessment procedure as required under those legal acts (Art 43(3)).

The requirements set out in Chapter 2 (referred to above) will also apply to those high-risk AI systems and will be part of that assessment. Points 4.3 (the technical documentation must be examined by the notified body), 4.4 (the notified body may require that the provider supplies further evidence or carries out further tests), 4.5 (where necessary to assess the conformity of the high-risk AI system with the Title III, Chapter 2 requirements and upon a reasoned request, the notified body must also be granted access to the source code of the AI system) and the fifth paragraph of point 4.6 (where the AI system does not meet the requirement relating to the data used to train it, etc.) of Annex VII will also apply (Art 43(3)).

High-risk AI systems in Annex III (Art 43)

For high-risk AI systems referred to in points 2 to 8 of Annex III, providers must follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body (Art 43(2)).

For high-risk AI systems referred to in point 5(b) of Annex III (AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use), placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU (CRD IV), the conformity assessment must be carried out as part of the procedure referred to in Arts 97 to 101 of that Directive (Art 43(2)).

Substantially modified high-risk AI systems (Art 43)

High-risk AI systems must undergo a new conformity assessment procedure whenever they are substantially modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current user (Art 43(4)).

For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, will not constitute a substantial modification (Art 43(4)).

Registration (Art 51)

Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2) (high risk AI systems referred to in Annex III), the provider or, where applicable, the authorised representative must register that system in the EU database referred to in Article 60.

Transparency obligations for certain AI systems (Art 52)

In addition to the Chapter 2 requirements above, Art 52 imposes transparency obligations on certain AI systems to take account of the specific risks of manipulation they pose.

Providers must ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation will not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence (Art 52(1)).

Users of an emotion recognition system or a biometric categorisation system must inform the natural persons exposed thereto of the operation of the system. This obligation will not apply to AI systems used for biometric categorisation which are permitted by law to detect, prevent and investigate criminal offences (Art 52(2)).

Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (a ‘deep fake’) must disclose that the content has been artificially generated or manipulated. However, there are certain specific exceptions to this disclosure obligation, set out in the last sentence of Art 52(3).

Measures for small-scale providers and users (Art 55)

Member States must:

  1. provide small-scale providers and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
  2. organise specific awareness-raising activities about the application of the AI Act, tailored to the needs of small-scale providers and users;
  3. where appropriate, establish a dedicated channel for communication with small-scale providers, users and other innovators to provide guidance and respond to queries about the implementation of the AI Act (Art 55(1)).

The specific interests and needs of the small-scale providers must be taken into account when setting the fees for conformity assessment under Art 43, reducing those fees proportionately to their size and market size (Art 55(2)).

Registration (Art 60)

The Commission must, in collaboration with the Member States, set up and maintain an EU database containing the information referred to in Art 60(2) concerning high-risk AI systems referred to in Art 6(2) which are registered in accordance with Art 51 (Art 60(1)).

The data listed in Annex VIII must be entered into the EU database by the providers. The Commission must provide them with technical and administrative support (Art 60(2)).

Information contained in the EU database will be accessible to the public (Art 60(3)).

The text of Annex VIII has been criticised as not requiring providers to register enough information. Expect further negotiation on this Annex.

Systems other than high-risk AI systems (Art 69)

Art 69(1) requires the Commission and the Member States to encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2.

Establishment of the European Artificial Intelligence Board (Art 56 and Art 58)

Art 56 provides for the establishment of a ‘European Artificial Intelligence Board’ (the ‘Board’).

The Board will provide advice and assistance to the Commission, in the context described in the AI Act, on amongst other things:

  1. technical specifications or existing standards regarding the requirements set out in Title III, Chapter 2;
  2. the use of harmonised standards or common specifications referred to in Arts 40 and 41 (Art 58, in part).

Alignment with other EU Legislation

There is considerable work to be done to align the AI Act with the GDPR, the draft ePrivacy Regulation, the proposed NIS2 Directive and with Regulation (EU) 2019/1150 (the EU Platforms Regulation).

Current Status

The negotiation of the AI Act follows the ordinary legislative procedure (Parliament and Council on an equal footing). The next stage is the publication of the draft report (11 April 2022; deadline for amendments to the draft report 18 May 2022), followed by the Committee vote (26 or 27 October 2022) and a vote of the full Parliament in November 2022 (first-reading position). These dates are approximate. The Council also needs to adopt its first position. This is followed by the trilogue and then a second reading in the Parliament. If adopted, the AI Act is expected to apply from 2025 onwards.


Copyright © Paul Foley March 2022 - All Rights Reserved.

Owner, Paul Foley Law

For legal services and advice use the Contact page or Email: paul@paulfoleylaw.ie
