
Q&A: EU Artificial Intelligence (AI) Act July 2023

By
Paul Foley
July 2023: Negotiations on the ground-breaking EU AI Act are in their final stages. In this comprehensive Q&A, we take you through the key provisions impacting compliance.

Our Legal Services on procuring, using and adopting AI may be accessed HERE >


1

What is the current status of the EU AI Act?

The European Parliament (“EP”) adopted a heavily revised draft of the EU AI Act on 14 June 2023. The EP draft constitutes Parliament’s published position for the purposes of the trialogue negotiations with the Council and the Commission. Agreement on and adoption of the EU AI Act is expected by the end of 2023 at the latest.

This Q&A reflects the EP version of the EU AI Act.

Note: words in bold below (excluding titles) are for the most part defined or explained in the EU AI Act. For space reasons they are not defined here.

2

What is the subject matter of the EU AI Act?

The EU AI Act proposes a risk-based approach, differentiating between AI systems or uses of AI systems that:

  1. create an unacceptable risk and are thus prohibited (Art 5);

  2. create a high risk (Art 6) and so are subject to specific requirements;

  3. interact with natural persons, are emotion recognition systems or biometric categorisation systems, or are used to generate or manipulate image, audio or video content (Art 52);

  4. are AI systems other than high risk systems that create low or minimal risk (Art 69);

  5. are compliant AI systems which nonetheless present a risk (Art 67).

3

What is the scope of the EU AI Act for providers and deployers? (Art 2)

As a result of the EP’s changes, the scope of the EU AI Act is now wider. Suppliers in third countries should note in particular what is said below.

The EU AI Act now, subject to some exceptions, applies to:

  • providers placing on the market or putting into service, AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country (Art 2(1)(a));

  • deployers of AI systems that have their place of establishment or who are located within the Union (Art 2(1)(b));

  • providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the AI system is intended to be used in the Union (Art 2(1)(c));

  • providers placing on the market or putting into service AI systems referred to in Art 5 (prohibited AI systems) outside the Union where the provider or distributor of such systems is located within the Union (Art 2(1)(ca));

  • importers and distributors of AI systems, as well as authorised representatives of providers of AI systems, where such importers, distributors or authorised representatives have their establishment or are located in the Union (Art 2(1)(cb));

  • affected persons that are located in the Union and whose health, safety or fundamental rights are adversely impacted by the use of an AI system that is placed on the market or put into service within the Union (Art 2(1)(cc) (part of)).

For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, and that fall within the scope of the harmonisation legislation listed in Annex II, Section B (List of other Union harmonisation legislation), only Article 84 of the EU AI Act will apply (Art 2(2)).

4

What is an AI system? (Art 3)

The EU AI Act now substantially incorporates the OECD definition of an AI system as follows:

‘Artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments (Article 3(1) of the EU AI Act).

The original definition was complemented by text in Annex I (Artificial Intelligence Techniques and Approaches). However, the EP draft of the EU AI Act deletes the Annex I text.

5

Which general principles apply to all AI systems? (Art 4a)

A new Article 4a (general principles applicable to all AI systems) requires operators to make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles: (a) human agency and oversight; (b) technical robustness and safety; (c) privacy and data governance; (d) transparency; (e) diversity, non-discrimination and fairness; and (f) social and environmental well-being. The Article explains what these terms mean.

Recital (9a) of the EU AI Act provides background to this drafting by reference to the Charter of Fundamental Rights of the EU.

AI literacy (Art 4b)

Additionally, a new Article 4b (AI literacy) imposes obligations on providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.

The EU AI Act explains that AI literacy, as defined, allows providers, users and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause (Recital 9c truncated).

The actors involved in the standardisation process must take into account the general principles in Article 4a of the EU AI Act, in accordance with Article 5 (Stakeholder participation in European standardisation), Article 6 (Access of SMEs to standards) and Article 7 (Participation of public authorities in European standardisation) of Regulation (EU) No 1025/2012 (the Standardisation Regulation) (Article 40(1c) truncated).

6

Which AI systems or uses of AI systems are prohibited?

A heavily revised Art 5 (extracts only and paraphrased below) now prohibits (subject to exceptions which are not reproduced here) the placing on the market, the putting into service or the use of an AI system:

  • that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision (Art 5(1)(a));

  • that exploits any of the vulnerabilities of a person or a specific group of persons, including characteristics of such person’s or such group’s known or predicted personality traits or social or economic situation, age, or physical or mental ability, in a manner that causes or is likely to cause that person or another person significant harm (Art 5(1)(b));

  • biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics (Art 5(1)(ba));

  • for the social scoring, evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of those persons or groups in social contexts that are unrelated to the contexts in which the data was originally generated or collected (Art 5(1)(c));

  • the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces (Art 5(1)(d));

  • for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or re-offending or for predicting the occurrence or reoccurrence of an actual or potential criminal or administrative offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons (Art 5(1)(da));

  • that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage (Art 5(1)(db));
  • to infer emotions of a natural person in the areas of law enforcement, border management, in workplace and education institutions (Art 5(1)(dc));
  • for the analysis of recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems, unless they are subject to a pre-judicial authorisation in accordance with Union law and strictly necessary for the targeted search connected to a specific serious criminal offence, as defined in Article 83(1) TFEU, that has already taken place, for the purpose of law enforcement (Art 5(1)(dd)).

This Article will not affect the prohibitions that apply where an artificial intelligence practice infringes another Union law, including Union law on data protection, non-discrimination, consumer protection or competition (Art 5(1a)).

7

What is a high risk AI system? (Art 6)

An AI system will be considered high-risk where:

  1. the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II (Section A – List of Union harmonisation legislation based on the New Legislative Framework. Section B - List of other Union harmonisation legislation);

  2. the product whose safety component under point 1 is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment related to risks for health and safety, with a view to the placing on the market or putting into service of that product under the Union harmonisation law listed in Annex II (Art 6(1)).

Such high-risk AI products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in-vitro diagnostic medical devices (Recital 30 truncated and paraphrased).

In addition to the high-risk AI systems referred to in Art 6(1), AI systems falling under one or more of the critical areas and use cases referred to in Annex III will be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it will be considered to be high-risk if it poses a significant risk of harm to the environment (Art 6(2)).

The Commission must, six months prior to the entry into force of the EU AI Act, after consulting the AI Office and relevant stakeholders, provide guidelines clearly specifying the circumstances where the output of AI systems referred to in Annex III would pose a significant risk of harm to the health, safety or fundamental rights of natural persons or cases in which it would not (Article 6(2)).

Where providers falling under one or more of the critical areas and use cases referred to in Annex III consider that their AI system does not pose a significant risk as described in Art 6(2), they must submit a reasoned notification to the national supervisory authority that they are not subject to the requirements of Title III Chapter 2 of the EU AI Act (Article 6(2a)).

Where the AI system is intended to be used in two or more Member States, that notification must be addressed to the AI Office. Without prejudice to Article 65 (Procedure for dealing with AI systems presenting a risk at national level), the national supervisory authority must review and reply to the notification, directly or via the AI Office, within three months if they deem the AI system to be misclassified (Art 6(2a)).

Providers that misclassify their AI system as not subject to the requirements of Title III Chapter 2 (Arts 8 to 15) and place it on the market before the deadline for objection by national supervisory authorities will be subject to fines under Article 71 (Article 6(2)(b)).

National supervisory authorities must submit a yearly report to the AI Office detailing the number of notifications received, the related high-risk areas at stake and the decisions taken concerning received notifications (Article 6(2)(c)).

8

Which high risk AI systems are referred to in Annex III?

Annex III has been extensively revised by the European Parliament and the revised text is accessible HERE >. Note the additions to Annex III now include:

AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view.

AI systems intended to be used by social media platforms that have been designated as very large online platforms within the meaning of Article 33 of Regulation (EU) 2022/2065 (DSA), in their recommender systems, to recommend to the recipient of the service user-generated content available on the platform.

9

What are the requirements for high risk AI systems?

Title III, Chapter 2 (Arts 8 to 15) (“Chapter 2”) of the EU AI Act contains complex, specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons.

More particularly, Art 8 (Compliance), Art 9 (Risk management system), Art 10 (Data and data governance), Art 11 (Technical documentation), Art 12 (Record-keeping), Art 13 (Transparency and provision of information to users), Art 14 (Human oversight) and Art 15 (Accuracy, robustness and cybersecurity) set out the legal requirements for high-risk AI systems.

These requirements, the Commission argues, are already state-of-the-art for many diligent operators and are the result of two years of preparatory work, derived from the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (HLEG), piloted by more than 350 organisations.

Note Article 10 (Data and data governance) has been substantially revised by the European Parliament and is essential reading for providers or deployers of high risk AI systems. A summary of it may be accessed HERE >.

10

What are the obligations of providers, deployers and others of high-risk AI systems? (Art 16)

The substance of provider obligations is found in Art 16. Amongst the most important provider obligations are to:

  • ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 (Arts 8 to 15) (as above) prior to placing on the market or putting into service (Art 16(1)(a));

  • indicate their name, registered trade name or registered trade mark, and their address and contact information on the high-risk AI system or, where that is not possible, on its accompanying documentation, as appropriate (Art 16(1)(aa));

  • ensure that natural persons to whom human oversight of high-risk AI systems is assigned are specifically made aware of the risk of automation or confirmation bias (Art 16(1)(ab));

  • provide specifications for the input data, or any other relevant information in terms of the datasets used, including their limitation and assumptions, taking into account the intended purpose and the foreseeable and reasonably foreseeable misuses of the AI system (Art 16(1)(ac));

  • ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service, in accordance with Art 43 (Art 16(1)(e));

  • comply with the registration obligations (see below) referred to in Art 51 (Art 16(1)(f));

  • take the necessary corrective actions as referred to in Article 21 and provide information in that regard (Art 16(1)(g));

  • upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of Title III (Art 16(1)(j)).

Additionally, obligations are placed on providers of high-risk AI systems as regards establishing a quality management system (Art 17), drawing up technical documentation (Art 18), conformity assessment (Art [19][43]), keeping logs automatically generated for at least six months (Art 20), taking corrective actions (as set out) (Art 21) and, where the high-risk AI system presents a risk within the meaning of Art 65(1), immediately informing the national competent authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken (Art 22 Duty of information).

11

What are the co-operation obligations of providers and deployers to the AI Office and the Commission? (Art 23)

Providers and where applicable, deployers of high-risk AI systems must, upon a reasoned request by a national competent authority (NCA) or where applicable, by the AI Office or the Commission, provide them with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of Title III.

Upon a reasoned request by an NCA or, where applicable, by the Commission, providers and, where applicable, deployers must also give the requesting national competent authority or the Commission, as applicable, access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control.

12

What are the obligations of providers as regards establishing a post-market monitoring system? (Art 61)

Providers and deployers must establish and document a post-market monitoring system (Art 61(1)), which must be based on a template post-market monitoring plan contained in an implementing act to be adopted by the EU Commission (Art 61(3) extract).

The post-market monitoring system must amongst other things, allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2 (Art 61(2) extract).

13

What are the provider reporting obligations in respect of serious incidents and of malfunctioning? (Art 62)

Providers and, where deployers have identified a serious incident, deployers of high-risk AI systems placed on the Union market must report any serious incident of those AI systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the national supervisory authority of the Member States where that incident or breach occurred (Art 62(1)).

Such notification must be made without undue delay after the provider or, where applicable, the deployer has established a causal link between the AI system and the incident or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider or, where applicable, the deployer becomes aware of the serious incident (Art 62(1)).

Upon establishing a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, providers shall take appropriate corrective actions pursuant to Article 21 (Art 62 (1a)).

Upon receiving a notification related to a breach of obligations under Union law intended to protect fundamental rights, the national supervisory authority must inform the national public authorities or bodies referred to in Article 64(3). The Commission must develop dedicated guidance to facilitate compliance with the obligations set out in Art 62(1). That guidance shall be issued by the entry into force of the EU AI Act and shall be assessed regularly (Art 62(2)).

For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this EU AI Act, the notification of serious incidents constituting a breach of fundamental rights under Union law shall be transferred to the national supervisory authority (Art 62(3)).

14

What are the obligations of product manufacturers and others? (Art 24 to Art 28)

Similar to the Market Surveillance Regulation (EU) 2019/1020, the EU AI Act imposes proportionate obligations on product manufacturers (Art 24), authorised representatives (Art 25), importers (Art 26) and distributors (Art 27), and further obligations on distributors, importers, deployers or any other third party (Arts 28, 28a and 28b).

Art 28(1) sets out when any distributor, importer, deployer or other third party will be considered a provider of a high-risk AI system and so be subject to the obligations of the provider under Article 16 of the EU AI Act.

Additionally, Art 28(2a) requires the provider of a high-risk AI system and the third party that supplies tools, services, components or processes that are used or integrated in the high-risk AI system to specify, by written agreement, the information, capabilities, technical access and other assistance, based on the generally acknowledged state of the art, that the third party is required to provide in order to enable the provider of the high-risk AI system to comply fully with the obligations under the EU AI Act.

The Commission must develop and recommend non-binding model contractual terms between providers of high-risk AI systems and third parties in order to assist both parties in drafting and negotiating contracts with balanced contractual rights and obligations, consistent with each party’s level of control (Art 28(2a) extract).

Art 28a (Unfair contractual terms unilaterally imposed on an SME or startup) will apply to all new contracts entered into after the date of entry into force of the EU AI Act. Businesses will be required to review existing contractual obligations that are subject to the EU AI Act within three years of the date of entry into force of the Act (Art 28a(7)).

15

What are the obligations of providers of foundation models? (Art 28b)

Art 28b imposes critically important obligations on providers of foundation models. These may be accessed HERE >

16

What are the deployer obligations in respect of high-risk AI systems? (Art 29 and 29a)

Deployers of high-risk AI systems must take appropriate technical and organisational measures to use such systems in accordance with the instructions of use accompanying the systems, pursuant to Art 29(2) and Art 29(5) (Art 29(1)).

To the extent deployers exercise control over the high-risk AI system, they must (extracts only and paraphrased):

  1. implement human oversight;

  2. ensure that the natural persons assigned to ensure human oversight are competent, properly qualified and trained;

  3. ensure that relevant and appropriate robustness and cybersecurity measures are regularly monitored for effectiveness and are regularly updated (Art 29(1a)).

To the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose (Art 29(3)).

Deployers must monitor the operation of the high-risk AI system and, when relevant, inform providers in accordance with Article 61. When the AI system presents a risk within the meaning of Article 65(1) they must, without undue delay, inform the provider or distributor and relevant national supervisory authorities and suspend the use of the system (Art 29(4)).

Deployers must also immediately inform first the provider, and then the importer or distributor and relevant national supervisory authorities when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. If the deployer is not able to reach the provider, Article 62 will apply mutatis mutandis (Art 29(4)).

For deployers that are credit institutions regulated by Directive 2013/36/EU, the monitoring obligation will be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive (Art 29(4)(1)).

Deployers of high-risk AI systems must keep the logs automatically generated by that high-risk AI system, to the extent that such logs are under their control and are required for ensuring and demonstrating compliance for ex-post audits of any reasonably foreseeable malfunction, incidents or misuses of the system, or for ensuring and monitoring for the proper functioning of the system throughout its lifecycle. The logs must be kept for a period of at least six months. The retention period shall be in accordance with industry standards and appropriate to the intended purpose of the high-risk AI system (Art 29(5)).

Deployers that are credit institutions regulated by Directive 2013/36/EU must maintain the logs as part of the documentation concerning internal governance arrangements, under Article 74 of that Directive (Art 29(5.1)).

Prior to putting into service or use a high-risk AI system at the workplace, deployers must consult workers' representatives with a view to reaching an agreement in accordance with Directive 2002/14/EC and inform the affected employees that they will be subject to the system (Art 29(5a)).

Deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies or undertakings referred to in Article 51(1a)(b) must comply with the registration obligations referred to in Article 51 (Art 29(5b)).

Where applicable, deployers of high-risk AI systems must use the information provided under Article 13 (Transparency and provision of information to users) to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, a summary of which shall be published, having regard to the specific use and the specific context in which the AI system is intended to operate (Art 29(6)).

Deployers of high-risk AI systems referred to in Annex III, which make decisions or assist in making decisions related to natural persons, must inform the natural persons that they are subject to the use of the high-risk AI system. This information must include the intended purpose and the type of decisions it makes. The deployer must also inform the natural person about its right to an explanation referred to in Article 68c (Art 29(6a)).

Deployers must cooperate with the relevant national competent authorities on any action those authorities take in relation to the high-risk AI system in order to implement the EU AI Act (Art 29(6b)).

17

Conformity assessment: what is it and what is required by the AI Act? (Arts 19, 39 to 50)

Conformity assessment means the process of verifying whether the requirements set out in Chapter 2 of the EU AI Act (referred to above) relating to an AI system have been fulfilled (Art 3(20)).

The European Parliament’s draft of the EU AI Act has, for the purposes of the trialogue, deleted Article 19.

The European Parliament’s requirements set out in amended Articles 40 to 50 of the EU AI Act may be accessed HERE >

18
Is registration required? (Art 51)

Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative must register that system in the EU database referred to in Article 60, in accordance with Article 60(2) (Art 51(1)).

Before putting into service or using a high-risk AI system in accordance with Article 6(2), the following categories of deployers must register the use of that AI system in the EU database referred to in Article 60: (a) deployers who are public authorities or Union institutions, bodies, offices or agencies or deployers acting on their behalf; (b) deployers who are undertakings designated as a gatekeeper under Regulation (EU) 2022/1925 (DMA) (Art 51(1a)).

Deployers who do not fall under Art 51(1a) will be entitled to voluntarily register the use of a high-risk AI system referred to in Article 6(2) in the EU database referred to in Article 60 (Art 51(2)).

An updated registration entry must be completed immediately following each substantial modification (Art 51(3)).

19
What are the transparency obligations for certain AI systems? (Art 52)

The text below represents extracts and truncated text from Art 52. The language is imprecise but is relatively clear. There are exceptions to Art 52 which are not reproduced here. The language below represents compromises by MEPs in order to reach an agreed position, prior to the start of the Trialogue.

Providers must ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person:

  1. that he or she is interacting with an AI system

  2. of the functions that are AI enabled

  3. whether there is human oversight and who is responsible for the decision-making process

  4. or his or her representative of the right to object against the application of such systems to them

  5. of his or her right to seek judicial redress against decisions taken by or harm caused by such AI system (Art 52(1)).

Users of an emotion recognition system or a biometric categorisation system (which is not prohibited under Article 5) must inform the natural persons exposed thereto of the operation of the system and obtain their consent prior to the processing of their biometric and other personal data in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680, as applicable (Art 52(2)).

Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), must disclose that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it.

Disclosure means labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content (Article 52(3)).

Where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic, video games visuals and analogous work or programme, transparency obligations set out in Article 52(3) are limited to disclosing of the existence of such generated or manipulated content in an appropriate clear and visible manner that does not hamper the display of the work and disclosing the applicable copyrights, where relevant.

The information referred to in Art 52(1) to Art 52(3) must be provided to the natural persons at the latest at the time of the first interaction or exposure. It must be accessible to vulnerable persons, such as persons with disabilities or children, complete, where relevant and appropriate, with intervention or flagging procedures for the exposed natural person (Art 52(3b)).

20
What are the measures for small-scale providers and users? (Art 55)

Member States must:

  1. provide small-scale providers and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;

  2. organise specific awareness raising activities about the application of this AI Act, tailored to the needs of the small-scale providers and users;

  3. where appropriate, establish a dedicated channel for communication with small-scale providers, users and other innovators to provide guidance and respond to queries about the implementation of this AI Act (Art 55(1)).

The specific interests and needs of SMEs, start-ups and users must be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their development stage, size, market size and market demand.

The Commission must regularly assess the certification and compliance costs for SMEs and start-ups, including through transparent consultations with SMEs, start-ups and users and shall work with Member States to lower such costs where possible. The Commission must report on these findings to the European Parliament and to the Council as part of the report on the evaluation and review of this Regulation provided for in Article 84(2) (Art 55(2)).

21
What is the AI Office? (Art 56, 56a and 56b)

Art 56 establishes the ‘European Artificial Intelligence Office’ (the ‘AI Office’), which will be an independent body of the Union with its seat in Brussels. It will have legal personality.

The tasks of the AI Office include:

  • monitoring and ensuring the effective and consistent application of the EU AI Act, without prejudice to the tasks of national supervisory authorities (Art 56b);
  • collecting and sharing Member States’ expertise and best practices, and assisting Member States’ national supervisory authorities and the Commission in developing the organisational and technical expertise required for the implementation of this Regulation, including by means of facilitating the creation and maintenance of a Union pool of experts (Art 56d);
  • providing interpretive guidance on how the EU AI Act applies to the ever-evolving typology of AI value chains, and what the resulting implications in terms of accountability of all the entities involved will be under the different scenarios, based on the generally acknowledged state of the art, including as reflected in relevant harmonised standards (Art 56p);
  • promoting AI literacy pursuant to Article 4b (Art 56s).

The AI Office will be accountable to the European Parliament and to the Council in accordance with the EU AI Act.

22
What is the EU database? (Art 60)

The Commission must, in collaboration with the Member States, set up and maintain an EU database containing information referred to in Art 60(2) concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51 (Art 60(1)).

The data listed in Annex VIII must be entered into the EU database by the providers (Art 60(2)).

23
What are the compliant AI systems which present a risk? (Art 67)

Where, having performed an evaluation under Article 65 (Procedure for dealing with AI systems presenting a risk at national level), the market surveillance authority of a Member State finds that although an AI system is in compliance with the EU AI Act, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights or to other aspects of public interest protection, it must require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe (Art 67(1)).

The provider or other relevant operators must ensure that corrective action is taken in respect of all the AI systems concerned that they have made available on the market throughout the Union within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1 (Art 67(1)).

The Member State must immediately inform the Commission and the other Member States. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken (Art 67(3)).

The Commission must without delay enter into consultation with the Member States and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures (Art 67(4)).

24
What happens when a provider is non-compliant? (Art 68)

Where the market surveillance authority of a Member State makes one of the following findings, it must require the relevant provider to put an end to the non-compliance concerned:

  1. the conformity marking has been affixed in violation of Article 49;

  2. the conformity marking has not been affixed;

  3. the EU declaration of conformity has not been drawn up;

  4. the EU declaration of conformity has not been drawn up correctly;

  5. the identification number of the notified body, which is involved in the conformity assessment procedure, where applicable, has not been affixed;

  6. the technical documentation is not available;

  7. the registration in the EU database has not been carried out;

  8. where applicable, the authorised representative has not been appointed (Art 68(1)).

Where the non-compliance referred to in Art 68(1) persists, the Member State concerned must take all appropriate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market (Art 68(2)).

25
Who has the right to lodge a complaint with a national supervisory authority? (Art 68a)

Without prejudice to any other administrative or judicial remedy, every natural person or group of natural persons shall have the right to lodge a complaint with a national supervisory authority, in particular in the Member State of his or her habitual residence, place of work or place of the alleged infringement, if they consider that the AI system relating to him or her infringes this Regulation (Art 68a(1)).

The national supervisory authority with which the complaint has been lodged must inform the complainant on the progress and the outcome of the complaint, including the possibility of a judicial remedy pursuant to Article 78 (Art 68a(2)).

26
Who has the right to an effective judicial remedy against a national supervisory authority? (Art 68b)

Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them (Art 68b(1)).

Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy where the national supervisory authority which is competent pursuant to Article 59 does not handle a complaint or does not inform the data subject within three months on the progress or outcome of the complaint lodged pursuant to Article 68a (Art 68b(2)).

Proceedings against a national supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established (Art 68b(3)).

Where proceedings are brought against a decision of a national supervisory authority which was preceded by an opinion or a decision of the Commission in the Union safeguard procedure, the supervisory authority shall forward that opinion or decision to the court (Art 68b(4)).

27
What rights does an affected person have in respect of a decision taken by a deployer? (Art 68c)

Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system, and which produces legal effects or similarly significantly affects him or her in a way that they consider adversely impacts their health, safety, fundamental rights, socio-economic well-being or any other of the rights deriving from the obligations laid down in the EU AI Act, shall have the right to request from the deployer a clear and meaningful explanation, pursuant to Article 13(1) (Transparency and provision of information), of the role of the AI system in the decision-making procedure, the main parameters of the decision taken and the related input data (Art 68c(1)).

28
What are the legal requirements for AI systems other than high risk AI systems? (Art 69)

‘General purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed (Art 3(1d)).

The Commission, the AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct intended, including where they are drawn up in order to demonstrate how AI systems respect the principles set out in Article 4a and can thereby be considered trustworthy, to foster the voluntary application to AI systems other than high risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems (Art 69(1)).

Codes of conduct intended to foster the voluntary compliance with the principles underpinning trustworthy AI systems, must, in particular:

  1. aim for a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems in order to observe such principles;

  2. assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities or whether measures could be put in place in order to increase accessibility, or otherwise support such persons or groups of persons;

  3. consider the way in which the use of their AI systems may have an impact or can increase diversity, gender balance and equality;

  4. have regard to whether their AI systems can be used in a way that, directly or indirectly, may residually or significantly reinforce existing biases or inequalities;

  5. reflect on the need and relevance of having in place diverse development teams in view of securing an inclusive design of their systems;

  6. give careful consideration to whether their systems can have a negative societal impact, notably concerning political institutions and democratic processes;

  7. evaluate how AI systems can contribute to environmental sustainability and in particular to the Union’s commitments under the European Green Deal and the European Declaration on Digital Rights and Principles (Art 69(2)).

Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders, including scientific researchers and their representative organisations, in particular trade unions, and consumer organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems. Providers adopting codes of conduct will designate at least one natural person responsible for internal monitoring (Art 69(3)).

The Commission and the AI Office must take into account the specific interests and needs of SMEs and start-ups when encouraging and facilitating the drawing up of codes of conduct (Art 69(4)).

(END)


© Paul Foley Law 2023. All rights Reserved. Learn more about our legal services in this area HERE >

To contact us: use the CONTACT PAGE >
or
Email: paul@paulfoleylaw.ie

Full copyright policy HERE >