The Charter of Fundamental Rights of the European Union (the "Charter") brings together the most important personal freedoms and rights enjoyed by citizens of the EU in one legally binding document. The Charter came into force in December 2009 and has the same legal value as the EU Treaties.
The relevance to artificial intelligence systems is that their specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of the fundamental rights enshrined in the Charter.
In February 2020, the European Commission (EUC) published the White Paper on "AI - A European approach to excellence and trust" and at the same time launched an online public consultation to garner views and opinions. The EUC received over 1,200 responses on the issues raised in the consultation. Following on from the consultation, the Commission published the draft Artificial Intelligence Act in April 2021 (the "AI Act").
The draft AI Act marks the first time the EU Commission has sought to regulate all aspects of artificial intelligence. It seeks to ensure a high level of protection for the fundamental rights enshrined in the Charter (referred to above).
The AI Act will not apply to AI systems developed or used exclusively for military purposes (for which in broad terms a number of regulatory regimes already exist).
The AI Act will, subject to some exceptions, apply to:
(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
(b) users of AI systems located within the Union; and
(c) providers and users of AI systems located in a third country, where the output produced by the system is used in the Union (Art 2(1)).
The AI Act follows a risk-based approach, differentiating between uses of AI systems that create (i) an unacceptable risk (which are prohibited), (ii) a high risk (which are permitted subject to the extensive requirements described below), and (iii) a low or minimal risk (which attract, at most, transparency obligations).
The AI Act proposes enormous fines for different categories of breaches: up to EUR 30 million or 6% of total worldwide annual turnover (whichever is higher) for breaches of Art 5 (prohibited practices) or Art 10 (data governance); up to EUR 20 million or 4% for non-compliance with other requirements; and up to EUR 10 million or 2% for supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities (Art 71).
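By way of illustration only, the following sketch models those ceilings; the category keys are our own shorthand, not terms used in the Act:

```python
# Illustrative sketch of the draft Art 71 fine ceilings: the applicable maximum
# is the higher of a fixed amount and a share of worldwide annual turnover.
# The category keys below are our own shorthand, not terms from the Act.

FINE_CEILINGS = {
    # category: (fixed ceiling in EUR, turnover percentage)
    "prohibited_practices_or_data_governance": (30_000_000, 0.06),  # Arts 5 and 10
    "other_non_compliance": (20_000_000, 0.04),
    "incorrect_or_misleading_information": (10_000_000, 0.02),
}

def max_fine(category: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum fine for a breach category (Art 71)."""
    fixed, pct = FINE_CEILINGS[category]
    return max(fixed, pct * worldwide_annual_turnover_eur)

# Example: a company with EUR 2bn turnover breaching Art 5 faces a ceiling of EUR 120m.
print(max_fine("prohibited_practices_or_data_governance", 2_000_000_000))
```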
Art 83 sets out how the AI Act will apply to existing AI systems. Expect this to be heavily negotiated.
The AI Act will not apply to AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before the date of application of the AI Act (referred to in Article 85(2)), unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned (Art 83(1), first paragraph).
However, the AI Act requirements will be taken into account, where applicable, in the evaluation of each of the large-scale IT systems established by the legal acts listed in Annex IX (Art 83(1), second paragraph).
Additionally, the AI Act will apply to high-risk AI systems, other than the ones referred to in Art 83(1), that have been placed on the market or put into service before the date of application of the AI Act (referred to in Article 85(2)) only if, from that date, those systems are subject to significant changes in their design or intended purpose (Art 83(2)).
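Taken together, Arts 83(1) and 83(2) amount to a simple decision rule for existing systems. The following sketch is a simplified illustration of that rule; the data model (flags on a hypothetical system record) is ours, not the Act's:

```python
from dataclasses import dataclass

@dataclass
class ExistingAISystem:
    """Hypothetical record of the facts relevant to Art 83."""
    on_market_before_application_date: bool
    component_of_annex_ix_system: bool
    significantly_changed: bool  # design or intended purpose changed from the date of application

def ai_act_applies(system: ExistingAISystem) -> bool:
    if not system.on_market_before_application_date:
        return True  # systems placed on the market after that date are in scope in the ordinary way
    if system.component_of_annex_ix_system:
        # Art 83(1): out of scope unless the replacement or amendment of the
        # underlying legal act significantly changes the AI system.
        return system.significantly_changed
    # Art 83(2): other pre-existing high-risk systems are caught only if they
    # are subject to significant changes in design or intended purpose.
    return system.significantly_changed
```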
The definition of an AI system aims to be as technology-neutral and future-proof as possible. In order to provide the needed legal certainty, the definition of an AI system is complemented by Annex I (Artificial Intelligence Techniques and Approaches). The definition differs from that published by the OECD, and indeed the Portuguese Presidency has proposed an amended definition of an AI system.
The list of high-risk AI systems referred to in Annex III (eight categories) contains a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future.
The AI Act permits the EU Commission to adopt delegated acts:
to update the list of techniques and approaches in Annex I (Art 4);
to amend the list of high-risk AI systems in Annex III (Art 7(1));
to amend the technical documentation requirements in Annex IV (Art 11(3));
to amend the conformity assessment provisions (Art 43(5) and (6)); and
to amend the content of the EU declaration of conformity in Annex V (Art 48(5)).
The Commission’s powers to adopt delegated acts are conferred by Art 73 of the AI Act.
In submissions on the AI Act, these powers have all been the subject of much comment: see below under Standards.
Art 5, in truncated form, prohibits the placing on the market, the putting into service or use of an AI system:
(a) that deploys subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour in a manner that causes or is likely to cause physical or psychological harm;
(b) that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause physical or psychological harm; or
(c) that is used by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons based on their social behaviour or personal characteristics ("social scoring"), where the social score leads to detrimental or unfavourable treatment that is unrelated to the original context of the data, or that is unjustified or disproportionate.
Art 5(1)(d) prohibits the use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and insofar as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific potential victims of crime, including missing children;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; or
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years.
An AI system will be considered high-risk where:
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II; and
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product under the Union harmonisation legislation listed in Annex II (Art 6(1)).
In addition to the high-risk AI systems referred to in Art 6(1), AI systems referred to in Annex III will also be considered high-risk (Art 6(2)).
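Art 6 therefore provides two independent routes into the high-risk category. The following is a minimal sketch of that test, using a hypothetical record of the relevant facts (Annex II and Annex III membership are treated here as given inputs, though in practice each requires legal analysis):

```python
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    """Hypothetical record of the facts relevant to Art 6."""
    safety_component_of_annex_ii_product: bool  # or is itself such a product
    third_party_assessment_required: bool       # under the Annex II legislation
    listed_in_annex_iii: bool

def is_high_risk(facts: AISystemFacts) -> bool:
    # Art 6(1): limbs (a) and (b) must both be satisfied.
    route_one = (facts.safety_component_of_annex_ii_product
                 and facts.third_party_assessment_required)
    # Art 6(2): Annex III systems are high-risk in their own right.
    return route_one or facts.listed_in_annex_iii
```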
Title III Chapter 2 (Arts 8 to 15) ("Chapter 2") of the AI Act contains detailed, specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons.
More particularly, Art 8 Compliance, Art 9 Risk management, Art 10 Data and data governance requirements, Art 11 Technical documentation, Art 12 Record-keeping, Art 13 Transparency and provision of information to users, Art 14 Human oversight, and Art 15 Accuracy, robustness and cybersecurity, set out the legal requirements for high-risk AI systems.
These requirements, the Commission argues, are already state-of-the-art for many diligent operators and are the result of two years of preparatory work, derived from the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (HLEG), piloted by more than 350 organisations.
The substance of provider obligations is found in Art 16. Amongst the most important are:
ensuring that their high-risk AI systems comply with the Chapter 2 requirements;
having a quality management system in place;
drawing up the technical documentation;
keeping the automatically generated logs, where these are under their control;
ensuring that the system undergoes the relevant conformity assessment procedure before being placed on the market or put into service;
complying with the registration obligations;
taking the necessary corrective actions where the system is not in conformity, and informing the national competent authorities (and, where applicable, the notified body) of the non-compliance;
affixing the CE marking; and
demonstrating conformity upon the reasoned request of a national competent authority.
Related to Chapter 2, obligations are placed on providers as regards a quality management system (Art 17), technical documentation (Art 18), conformity assessment (Art 19) and automatically generated logs (Art 20).
Providers must on request demonstrate the conformity of a high-risk system and must also give the national competent authority access to the logs automatically generated by the high-risk system, to the extent such logs are under their control, whether by virtue of a contract or by law.
Providers must establish and document a post-market monitoring system (Art 61(1)), which must be based on a template post-market monitoring plan contained in an implementing act to be adopted by the EUC (Art 61(3)).
The post-market monitoring system must, amongst other things, allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2 (Art 61(2)).
Similar to the Market Surveillance Regulation (EU) 2019/1020, the AI Act imposes proportionate obligations on product manufacturers (Art 24), authorised representatives (Art 25), importers (Art 26) and distributors (Art 27), and further obligations on distributors, importers, users or any other third party (Art 28). These Articles have significant implications for distributors and importers.
Currently the AI Act does not provide for individual enforcement rights. One of the amendments called for is the inclusion of an explicit right of redress for individuals.
In addition to Art 28, Art 29 places a small number of obligations on users (some argue the word "users" should be replaced by "deployers") of high-risk AI systems. Its drafting has been criticised for omitting many additional obligations that ought to be placed on users. We will have to see how this article evolves in negotiations.
The most important Art 29 obligation is the user's obligation to use the information provided under Art 13 (Transparency and provision of information to users) to comply with its obligation to carry out a data protection impact assessment under Art 35 of Regulation (EU) 2016/679 (GDPR) or Art 27 of Directive (EU) 2016/680, where applicable (Art 29(6)).
Conformity assessment means the process of verifying whether the requirements set out in Chapter 2 of the AI Act (referred to above) relating to an AI system have been fulfilled (Art 3(20)).
High-risk AI systems which are in conformity with harmonised standards which have been published in the Official Journal will be presumed to be in conformity with the Chapter 2 requirements, to the extent those standards cover those requirements (Art 40).
The Commission may also adopt common specifications in respect of the Chapter 2 requirements where inter alia, relevant harmonised standards are insufficient (Art 41(1)). There is also a presumption of conformity with the requirements in Article 10(4) (Training, validation and testing data sets) for certain trained and tested high risk systems referred to in Article 42 (Art 42(1)).
The main criticisms made of this part of the AI Act, by Ebers and others, are the level of delegation of regulatory power to the European standardisation organisations (ESOs), the impossibility for consumer associations and other civil society organisations to influence the development of standards, and the lack of judicial means to control standards once they have been adopted.
Instead, they recommend that the AI Act codify a set of legally binding requirements for high-risk AI systems (e.g. prohibited forms of algorithmic discrimination), which ESOs may then specify further through harmonised standards. Furthermore, they advocate that European policy-makers should strengthen democratic oversight of the standardisation process (European Parliament briefing on the AI Act of January 2022).
For high-risk AI systems in point 1 of Annex III (AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons), where the provider has applied harmonised standards referred to in Article 40, or common specifications referred to in Article 41, the provider must follow one of the following procedures:
(a) the conformity assessment procedure based on internal control referred to in Annex VI; or
(b) the conformity assessment procedure based on an assessment of the quality management system and of the technical documentation, with the involvement of a notified body, referred to in Annex VII (Art 43(1)).
Where harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider must follow the conformity assessment procedure set out in Annex VII (Art 43(1)).
For high-risk AI systems to which the legal acts listed in Annex II, section A (List of Union harmonisation legislation based on the New Legislative Framework) apply, the provider must follow the relevant conformity assessment as required under those legal acts (Art 43(3)).
The requirements set out in Chapter 2 (referred to above) will also apply to those high-risk AI systems and will be part of that assessment. Points 4.3 (The technical documentation must be examined by the notified body), 4.4 (the notified body may require that the provider supplies further evidence or carries out further tests), 4.5 (Where necessary to assess the conformity of the high-risk AI system with the Title III, Chapter 2 requirements and upon a reasoned request, the notified body must also be granted access to the source code of the AI system) and the fifth paragraph of point 4.6 (Where the AI system does not meet the requirement relating to the data used to train it etc) of Annex VII will also apply (Art 43(3)).
For high-risk AI systems referred to in points 2 to 8 of Annex III, providers must follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body (Art 43(2)).
For high-risk AI systems referred to in point 5(b) of Annex III (AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use), placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU (CRD IV), the conformity assessment must be carried out as part of the procedure referred to in Arts 97 to 101 of that Directive (Art 43(2)).
High-risk AI systems must undergo a new conformity assessment procedure whenever they are substantially modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current user (Art 43(4)).
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, will not constitute a substantial modification (Art 43(4)).
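The routing logic of Art 43 can be summarised in a short sketch. This is a simplified illustration of the tests described above, not a substitute for them; the input record, the point 5(b) simplification and the returned labels are our own shorthand:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HighRiskSystem:
    """Hypothetical record of the facts relevant to Art 43."""
    annex_iii_point: Optional[int]    # 1 to 8 if listed in Annex III, else None
    standards_or_specs_applied: bool  # harmonised standards (Art 40) or common specifications (Art 41)
    annex_ii_section_a: bool          # sectoral Union harmonisation legislation applies
    crd_iv_credit_institution: bool   # credit institution regulated by Directive 2013/36/EU

def conformity_route(s: HighRiskSystem) -> str:
    if s.annex_ii_section_a:
        # Art 43(3): follow the sectoral procedure; the Chapter 2
        # requirements are folded into that assessment.
        return "sectoral conformity assessment under the Annex II section A legislation"
    if s.annex_iii_point == 1:
        if s.standards_or_specs_applied:
            # Art 43(1): the provider may choose internal control (Annex VI)
            # or a notified-body assessment (Annex VII).
            return "Annex VI or Annex VII, at the provider's choice"
        return "Annex VII (notified body involvement required)"
    if s.annex_iii_point == 5 and s.crd_iv_credit_institution:
        # Art 43(2) carve-out for creditworthiness systems (Annex III point 5(b)).
        return "assessment as part of the Arts 97 to 101 CRD IV procedure"
    # Art 43(2): Annex III points 2 to 8, internal control only.
    return "Annex VI (internal control, no notified body)"
```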
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2) (high-risk AI systems referred to in Annex III), the provider or, where applicable, the authorised representative must register that system in the EU database referred to in Article 60 (Art 51).
As an addition to the requirements of Chapter 2 above, Art 52 requires certain AI systems to take account of the specific risks of manipulation they pose.
Providers must ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation will not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence (Article 52(1)).
Users of an emotion recognition system or a biometric categorisation system must inform the natural persons exposed to it of the operation of the system. This obligation will not apply to AI systems used for biometric categorisation which are permitted by law to detect, prevent and investigate criminal offences (Article 52(2)).
Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake') must disclose that the content has been artificially generated or manipulated. However, the last sentence of Article 52(3) contains certain specific exceptions to this disclosure obligation (Article 52(3)).
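The three Art 52 duties can be illustrated as a simple checklist. In this sketch the law-enforcement exceptions are reduced to single boolean inputs for brevity, which considerably simplifies the actual provisions:

```python
def transparency_notices(interacts_with_people: bool,
                         obvious_from_context: bool,
                         emotion_or_biometric_categorisation: bool,
                         deep_fake_content: bool,
                         law_enforcement_exception: bool) -> list[str]:
    """Return the Art 52 disclosure duties triggered on these (simplified) facts."""
    notices = []
    if interacts_with_people and not obvious_from_context and not law_enforcement_exception:
        notices.append("Art 52(1): inform people they are interacting with an AI system")
    if emotion_or_biometric_categorisation and not law_enforcement_exception:
        notices.append("Art 52(2): inform exposed persons that the system is in operation")
    if deep_fake_content:
        notices.append("Art 52(3): disclose that the content is artificially generated or manipulated")
    return notices
```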
Member States must provide small-scale providers and start-ups with priority access to the AI regulatory sandboxes (to the extent they fulfil the eligibility conditions), organise specific awareness-raising activities about the application of the AI Act tailored to the needs of small-scale providers and users, and, where appropriate, establish a dedicated channel for communication with small-scale providers, users and other innovators (Art 55(1)).
The specific interests and needs of the small-scale providers must be taken into account when setting the fees for conformity assessment under Art 43, reducing those fees proportionately to their size and market size (Art 55(2)).
The Commission must, in collaboration with the Member States, set up and maintain an EU database containing the information referred to in Art 60(2) concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51 (Art 60(1)).
The data listed in Annex VIII must be entered into the EU database by the providers. The Commission must provide them with technical and administrative support (Art 60(2)).
Information contained in the EU database will be accessible to the public (Art 60(3)).
The text of Annex VIII has been criticised for not requiring providers to register enough information. Expect further negotiation on this Annex.
Art 69(1) requires the Commission and the Member States to encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2.
Art 56 provides for the establishment of a 'European Artificial Intelligence Board' (the 'Board').
The Board will provide advice and assistance to the Commission in the context described in the AI Act in order to, amongst other things: contribute to the effective cooperation of the national supervisory authorities and the Commission; coordinate and contribute to guidance and analysis on emerging issues across the internal market; and assist the national supervisory authorities and the Commission in ensuring the consistent application of the AI Act (Art 56(2)).
There is considerable work to be done to align the AI Act with the GDPR, the draft ePrivacy Regulation, the proposed NIS2 Directive and with Regulation (EU) 2019/1150 (the EU Platforms Regulation).
The negotiation of the AI Act follows the ordinary legislative procedure (Parliament and Council on an equal footing). The next stage is the publication of the draft report (11 April 2022; deadline for amendments to the draft report 18 May 2022), thereafter the Committee vote (26 or 27 October 2022), followed by a vote of the entire Parliament in November 2022 (first reading position). These dates are approximate. The Council also needs to adopt its first position. This is followed by the trilogue, and then a second reading in the Parliament. If adopted, the AI Act is expected to apply from 2025 onwards.
Copyright © Paul Foley March 2022 - All Rights Reserved.
Owner, Paul Foley Law
For legal services and advice use the Contact page or Email: paul@paulfoleylaw.ie