
All about the EU AI Act (Regulation (EU) 2024/1689)

By
Paul Foley
EU AI Act: the problems with AI, ChatGPT and Gemini AI, the EU AI Pact, AI and financial services, AI and data protection, and the key provisions of the EU AI Act.

(About Paul Foley Law)

Paul Foley Law provides advice and drafting services (including required due diligence, regulatory advice on AI purpose determination and compliance, contracts, licences, policies, the appointment of authorised representatives, and the impact on data protection) related to the procurement, import, deployment (internal and external), transfer of the deployer role to other operators, and distribution of general-purpose AI systems, whether with or without systemic risk.
Contact: paul@paulfoleylaw.ie


The EU AI Act has now been published in the Official Journal and enters into force in August 2024.

Since general-purpose AI systems were first placed on the market in 2023, and quite apart from the wondrous claims made for them, they have given rise to a plethora of legal issues concerning data protection, information security and intellectual property (particularly copyright infringement). There have also been problems with incorrect and unexpected outputs, model hallucinations, and bias in information and in work processes where such systems have been used.

The most advanced general-purpose AI models, namely OpenAI's GPT-4 and, in all likelihood, Google DeepMind's Gemini, are likely to be regulated as general-purpose AI models with systemic risk under the EU AI Act.

In parallel to the EU AI Act, the EU Commission has been promoting the AI Pact, seeking industry's voluntary commitment to anticipate the EU AI Act and to start implementing its requirements ahead of the legal deadlines (which are essentially staggered: see below).

Additionally, back in January 2024, the Commission launched a package of measures to support European startups and SMEs in the development of trustworthy Artificial Intelligence (AI) that respects EU values and rules: see here:

https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383

These measures align with Chapter VI (Measures in support of innovation in Arts 57 to 63) of the EU AI Act. 

With regard to data protection and AI, the CNIL in France has published very helpful guidance which can be accessed here:

https://www.cnil.fr/fr/ai-how-to-sheets

As regards financial services providers, deployers of the high-risk AI systems referred to in points 5(b) and 5(c) of Annex III ((b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; (c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance) will be required to perform an assessment of the impact on fundamental rights that the use of such a system may produce (Art 27(1)).

Additionally, in order to gain a better understanding of how and for which purposes AI applications can be used in the financial sector, the EU Commission (DG FISMA) is inviting stakeholders to share their experience and views on use cases, benefits, barriers and risks, as well as their needs. The responses received will enable the Commission to provide guidance to the financial sector on the implementation of the AI Act in their specific market areas.

Views of financial services providers (the deadline is 13 September 2024) should be shared here:
Targeted consultation on artificial intelligence in the financial sector - European Commission (europa.eu)

Separately, on 30 May 2024, ESMA issued a public statement on the use of Artificial Intelligence (AI) in the provision of retail investment services (ESMA35-335435667-5924).

As regards Member States, the EU AI Act, which has direct effect, requires Member States to provide for its supervision and enforcement at national level. Member States must designate or establish at least one notifying authority and one or more market surveillance authorities as national competent authorities for the EU AI Act within 12 months of its entry into force.


What the EU AI Act does

The EU AI Act follows a risk-based approach, differentiating between:

  • AI practices that are prohibited (Art 5);
  • AI systems that are considered to be high-risk (Art 6(1), 6(2), the last paragraph of Art 6(3), and Annex III);
  • AI systems not considered to be high-risk (Art 6(3) and 6(4));
  • General-purpose AI models (Art 3(63) and Art 53);
  • General-purpose AI models with systemic risk (Art 51);
  • AI systems that create low or minimal risk.

Who does the EU AI Act apply to?

The EU AI Act applies to:

  1. providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the Union, irrespective of whether those providers are established or located within the Union or in a third country;

  2. deployers of AI systems that have their place of establishment or are located within the Union;

  3. providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;

  4. importers and distributors of AI systems;

  5. product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark (Art 25 (most relevant));

  6. authorised representatives of providers, which are not established in the Union (Art 22 and Art 54);

  7. affected persons that are located in the Union (Art 85 and Art 86).

Where the EU AI Act does not apply

The EU AI Act does not apply to:

  • AI systems placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities;

  • AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities;

  • public authorities in a third country or international organisations, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals;

  • AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development;

  • any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service;

  • obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity;

  • AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 (prohibited practices) or Article 50 (Transparency obligations for providers and deployers of certain AI systems). (Art 2(3-12)).

Prohibited Practices

The following practices are prohibited:

The ‘placing on the market’, the ‘putting into service’ or the use of an AI system that:

  1. deploys subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, which materially distort the behaviour of a person by impairing informed decision-making, causing the person to take a decision that is reasonably likely to cause that person significant harm;

  2. exploits any of the vulnerabilities of a natural person due to the person’s age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person in a manner that causes or is reasonably likely to cause that person significant harm;

  3. evaluates or classifies natural persons based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

    (i) detrimental or unfavourable treatment of certain natural persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;

    (ii) detrimental or unfavourable treatment of certain natural persons that is unjustified or disproportionate to their social behaviour or its gravity;

  4. makes risk assessments of natural persons in order to assess or predict the risk of the person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition will not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;

  5. creates or expands facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

  6. infers emotions of a natural person in the areas of the workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

  7. is a biometric categorisation system that categorises natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data, or the categorisation of biometric data in the area of law enforcement;

  8. is a ‘real-time’ remote biometric identification system used in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:

    (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;

    (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;

    (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.

The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces is subject to further controls in Arts 5(2) to 5(5), including the completion of a fundamental rights impact assessment (Art 27) and registration of the system in the EU database according to Art 49. However, in duly justified cases of urgency, the use of such systems may be commenced without registration in the EU database, provided that such registration is completed without undue delay. Additionally, Art 5 requires prior authorisation granted by a judicial authority or an independent administrative authority (Art 5).

What is a general-purpose AI model?

A ‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market (Art 3(63)).

AI literacy (Art 4)

Providers and deployers of AI systems must take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, the context in which the AI systems are to be used, and the persons or groups of persons on whom the AI systems are to be used.

Obligations for providers of general purpose AI models (Art 53)

Providers of general-purpose AI models must:

  1. draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which must contain, at a minimum, the information set out in Annex XI (Technical documentation referred to in Article 53(1), point (a) - technical documentation for providers of general-purpose AI models) for the purpose of providing it, upon request, to the AI Office and the national competent authorities;

  2. draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation must:

    (i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations under this EU AI Act; and

    (ii) contain, at a minimum, the elements set out in Annex XII (Transparency information referred to in Article 53(1), point (b) - technical documentation for providers of general-purpose AI models to downstream providers that integrate the model into their AI system).

  3. put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Art 4(3) of Directive (EU) 2019/790 (on copyright and related rights in the Digital Single Market) (one illustrative machine-readable signal is sketched after this list);

  4. draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office (Art 53(1)).
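
By way of illustration only: neither the EU AI Act nor Directive (EU) 2019/790 prescribes a particular technology for identifying reservations of rights, but one commonly discussed machine-readable signal is a website's robots.txt file. The sketch below (Python, with a hypothetical crawler name and URLs) shows how a training-data crawler might check that signal before fetching content; it is a minimal sketch under those assumptions, not a compliance solution.

    # Minimal, illustrative check of a robots.txt opt-out signal before fetching
    # content for model training. robots.txt is only one possible machine-readable
    # expression of a rights reservation; the crawler name and URLs are hypothetical.
    from urllib import robotparser

    CRAWLER_NAME = "ExampleTrainingCrawler"

    def may_fetch_for_training(page_url: str, robots_url: str) -> bool:
        parser = robotparser.RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # download and parse robots.txt
        return parser.can_fetch(CRAWLER_NAME, page_url)

    # Example use (hypothetical site):
    # allowed = may_fetch_for_training("https://example.com/articles/1",
    #                                  "https://example.com/robots.txt")
    # if not allowed:
    #     print("Rights reservation signalled: skip this page for training.")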

The obligations set out in Art 53(1) (a) and (b), will not apply to providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks (Art 53(2)).

Providers of general-purpose AI models must cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this EU AI Act (Art 53(3)).

Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 (Codes of Practice) to demonstrate compliance with the obligations set out in Art 53(1), until a harmonised standard is published (Art 53(4)).

Transparency obligations for certain AI systems (Art 50)

The Art 50 obligations (below) are very significant. Non-compliance with Art 50 by an operator (a provider, product manufacturer, deployer, authorised representative, importer or distributor) or by a notified body will be subject to administrative fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher (Art 99(4)).

Providers must ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation does not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, unless those systems are available for the public to report a criminal offence (Art 50(1)).

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, must ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.

This obligation will not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences (Art 50(2)).
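
By way of illustration only: the EU AI Act does not prescribe a particular marking technique. One simple way to make an output machine-readably identifiable as AI-generated is to attach provenance metadata to the file. The sketch below (Python, using the Pillow library, with hypothetical field names) embeds such a marker in a PNG image; it is a minimal sketch under those assumptions, not a statement of what the Act requires.

    # Minimal, illustrative embedding of a machine-readable "AI generated" marker
    # in a PNG's metadata using Pillow. The field names are hypothetical and are
    # not mandated by the EU AI Act or by any particular standard.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
        image = Image.open(src_path)
        metadata = PngInfo()
        metadata.add_text("ai_generated", "true")   # machine-readable flag
        metadata.add_text("generator", generator)   # identifies the generating system
        image.save(dst_path, pnginfo=metadata)

    # Example use (hypothetical file and model names):
    # mark_as_ai_generated("output.png", "output_marked.png", "example-model-v1")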

Deployers of an emotion recognition system or a biometric categorisation system must inform the natural persons exposed thereto of the operation of the system, and must process the personal data in accordance with Regulation (EU) 2016/679 (GDPR), Regulation (EU) 2018/1725 (on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data) and Directive (EU) 2016/680 (on the protection of natural persons with regard to the processing of personal data for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data), as applicable. This obligation does not apply to AI systems used for biometric categorisation and emotion recognition which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law (Art 50(3)).

Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest must disclose that the text has been artificially generated or manipulated. This obligation does not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences, or where the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content (Art 50(4)).

The information referred to in Art 50(1) to Art 50(4) must be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information must conform to the applicable accessibility requirements (Art 50(5)).

The AI Office must encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content (Art 50(7) part of).

Authorised Representatives (Art 54)

Prior to placing a general-purpose AI model on the Union market, providers established in third countries must, by written mandate, appoint an authorised representative which is established in the Union (Art 54(1)).

The provider must enable its authorised representative to perform the tasks specified in the mandate received from the provider (Art 54(2)).

The authorised representative must perform the tasks specified in the mandate received from the provider. It must provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this EU AI Act, the mandate must empower the authorised representative to carry out the following tasks:

  1. verify that the technical documentation specified in Annex XI (Technical documentation referred to in Article 53(1), point (a) - technical documentation for providers of general-purpose AI models) has been drawn up and all obligations referred to in Art 53 and, where applicable, Art 55 have been fulfilled by the provider;

  2. keep at the disposal of the AI Office and national competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, a copy of the technical documentation specified in Annex XI and the contact details of the provider that appointed the authorised representative;

  3. provide the AI Office, upon a reasoned request, with all the information and documentation, including that referred to in point (b), necessary to demonstrate compliance with the obligations in this Chapter (Chapter V General-purpose AI models);

  4. cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to the general-purpose AI model, including when the model is integrated into AI systems placed on the market or put into service in the Union (Art 54(3)).

The mandate must empower the authorised representative to be addressed, in addition to or instead of the provider, by the AI Office or the competent authorities, on all issues related to ensuring compliance with this EU AI Act (Art 54(4)).

The authorised representative must terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this EU AI Act. In such a case, it must also immediately inform the AI Office about the termination of the mandate and the reasons therefor (Art 54(5)).

The obligation set out in this Art 54 will not apply to providers of general-purpose AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available, unless the general-purpose AI models present systemic risks (Art 54(6)).

Classification of general-purpose AI models as general-purpose AI models with systemic risk (Art 51)

A general-purpose AI model will be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions:

  1. it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;

  2. based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII (Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51) (Art 51(1)).

A general-purpose AI model will be presumed to have high impact capabilities under Art 51(1)(a) when the cumulative amount of computation used for its training, measured in floating point operations (FLOP), is greater than 10^25 (Art 51(2)).

The Commission states that this threshold captures the currently most advanced GPAI models, namely OpenAI's GPT-4 and, in all likelihood, Google DeepMind's Gemini.

The capabilities of models above this threshold are not yet well enough understood, the Commission argues; they could pose systemic risks, and it is therefore reasonable to subject their providers to the additional set of obligations.

FLOP, the Commission states, is a first proxy for model capabilities, and the exact FLOP threshold can be updated upwards or downwards by the European AI Office, for example in the light of progress in objectively measuring model capabilities and of developments in the computing power needed for a given performance level.
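
To put the 10^25 FLOP figure in context, a rough heuristic often used for dense transformer models (and not part of the EU AI Act) estimates total training compute as approximately 6 × parameters × training tokens. The sketch below, with purely illustrative numbers, shows how a provider might make a first-pass check of a model against the Art 51(2) presumption threshold.

    # Rough, illustrative comparison of estimated training compute against the
    # EU AI Act's 10^25 FLOP presumption threshold (Art 51(2)).
    # Uses the common "6 * parameters * training tokens" heuristic for dense
    # transformer training compute; the model figures below are invented.

    THRESHOLD_FLOP = 1e25

    def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
        """Approximate total training compute for a dense transformer."""
        return 6.0 * n_parameters * n_training_tokens

    # Hypothetical model: 500 billion parameters trained on 10 trillion tokens.
    flop = estimated_training_flop(5e11, 1e13)
    print(f"Estimated training compute: {flop:.2e} FLOP")
    print("Presumed to have high-impact capabilities (Art 51(2))?", flop > THRESHOLD_FLOP)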

The Commission is required to adopt delegated acts in accordance with Art 97 (Exercise of the delegation: the power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in that Article) to amend the thresholds listed in Art 51(1) and Art 51(2), as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, where necessary for these thresholds to reflect the state of the art (Art 51(3)).

Procedure (Art 52)

Where a general-purpose AI model meets the condition referred to in Art 51(1)(a), the relevant provider must notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification must include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk (Art 52(1)).

The provider of a general-purpose AI model that meets the condition referred to in Art 51(1)(a), may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that requirement, the general-purpose AI model does not present, due to its specific characteristics, systemic risks and therefore should not be classified as a general-purpose AI model with systemic risk (Art 52(2)).

Where the Commission concludes that the arguments submitted under Art 52(2) are not sufficiently substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not present, due to its specific characteristics, systemic risks, it must reject those arguments, and the general-purpose AI model must be considered to be a general-purpose AI model with systemic risk (Art 52(3)).

The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or following a qualified alert from the scientific panel under Art 90(1)(a) (Art 90 Alerts of systemic risks by the scientific panel) on the basis of criteria set out in Annex XIII (Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51). The Commission is empowered to adopt delegated acts in accordance with Art 97 in order to amend Annex XIII by specifying and updating the criteria set out in that Annex (Art 52(4)).

Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk under Art 52(4), the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request must contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision (Art 52(5)).

The Commission must ensure that a list of general-purpose AI models with systemic risk is published and must keep that list up to date, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law. (Art 52(6)).

Obligations of providers of general-purpose AI models with systemic risk (Art 55)

In addition to the obligations listed in Art 53 (Obligations for providers of general purpose AI models) and Art 54 (Authorised Representatives), providers of general-purpose AI models with systemic risk must:

  1. perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;

  2. assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk;

  3. keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;

  4. ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model (Art 55(1)).

Providers of general-purpose AI models with systemic risk may rely on codes of practice (Art 56) to demonstrate compliance with the obligations set out in Art 55(1), until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations (Art 55(2)).

Classification rules for high-risk AI systems (Art 6)

Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in Art 6(1)(a) and (b), that AI system will be considered to be high-risk where both of the following conditions are fulfilled:

  1. (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I (List of Union harmonisation legislation);

  2. (b) the product whose safety component pursuant to Art 6(1)(a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I (Art 6(1)).

In addition to the high-risk AI systems referred to in Art 6(1), AI systems referred to in Annex III will be considered to be high-risk (Art 6(2)).

AI Systems listed in Annex III not considered high risk (Art 6(3) and 6(4))

An AI system referred to in Annex III will not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.

The previous paragraph will apply where any of the following conditions is fulfilled:

  1. the AI system is intended to perform a narrow procedural task;

  2. the AI system is intended to improve the result of a previously completed human activity;

  3. the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or

  4. the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III (High-risk AI systems referred to in Art 6(2)) (Art 6(3)).

However, an AI system referred to in Annex III will always be considered to be high-risk where the AI system performs profiling of natural persons (Art 6(3), last paragraph).

A provider who considers that an AI system referred to in Annex III is not high-risk must document its assessment before that system is placed on the market or put into service. Such provider will be subject to the registration obligation set out in Article 49(2) (Registration). Upon request of national competent authorities, the provider must provide the documentation of the assessment (Art 6(4)).

The Commission must, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than 18 months from the date of entry into force of this EU AI Act, provide guidelines specifying the practical implementation of this Art in line with Art 96 (Guidelines from the Commission on the implementation of this Regulation) together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk (Art 6(5)).

Requirements for High Risk AI Systems (Section 2 (Arts 8 to 15))

Arts 8 to 15 of the EU AI Act contain detailed, specific requirements for high-risk AI systems.

These requirements, the Commission argued at an early stage, are already state-of-the-art for many diligent operators and are the result of two years of preparatory work, derived from the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (AI HLEG), piloted by more than 350 organisations.

Compliance (Art 8)

High-risk AI systems must comply with the requirements laid down in Section 2 (Arts 8 to 15), taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Art 9 must be taken into account when ensuring compliance with those requirements (Art 8(1)).

Art 8(2) provides a mechanism for avoiding duplication where an AI system is required to comply with the EU AI Act and also with Union harmonisation legislation listed in Section A of Annex I.

Risk management system (Art 9)

A risk management system must be established, implemented, documented and maintained in relation to high-risk AI systems (Art 9(1)). Arts 9(2) to 9(10) (not reproduced here) explain what this must cover and what must be ensured and carried out.

Data and data governance (Art 10)

High-risk AI systems which make use of techniques involving the training of AI models with data must be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in Arts 10(2) to 10(5) (not reproduced here) whenever such data sets are used (Art 10(1)).

For the development of high-risk AI systems not using techniques involving the training of AI models, Arts 10(2) to 10(5) apply only to the testing data sets (Art 10(6)).

Technical documentation (Art 11)

The technical documentation of a high-risk AI system must be drawn up before that system is placed on the market or put into service and must be kept up to date. The technical documentation must be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in Section 2 (Arts 8 to 15) and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It must contain, at a minimum, the elements set out in Annex IV (Technical documentation referred to in Article 11(1)) (Art 11(1)).

SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission will establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies must accept the form for the purposes of the conformity assessment (Art 11(1)).

Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation must be drawn up containing all the information set out in Art 11(1), as well as the information required under those legal acts (Art 11(2)).

Record-keeping (Art 12)

High-risk AI systems must technically allow for the automatic recording of events (logs) over the lifetime of the system (Art 12(1)).

In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities must enable the recording of events relevant for:

  1. identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Art 79(1) (AI systems presenting a risk shall be understood as a ‘product presenting a risk’) or in a substantial modification;

  2. facilitating the post-market monitoring referred to in Art 72; and

  3. monitoring the operation of high-risk AI systems referred to in Art 26(5) (Deployers to monitor the operation of the high-risk AI systems) (Art 12(2)).

For high-risk AI systems referred to in point 1(a) of Annex III (remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be), the logging capabilities must provide, at a minimum:

  1. recording of the period of each use of the system (start date and time and end date and time of each use);

  2. the reference database against which input data has been checked by the system;

  3. the input data for which the search has led to a match;

  4. the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5) (Art 12(3)).
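
Purely as an illustration (the EU AI Act does not prescribe any particular log format), the minimum Art 12(3) elements could be captured in a structured log record along the following lines. This is a minimal sketch in Python, and the field names and values are hypothetical.

    # Illustrative log record covering the minimum Art 12(3) logging elements for
    # remote biometric identification systems (Annex III, point 1(a)).
    # The structure and field names are hypothetical, not prescribed by the EU AI Act.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class BiometricIdentificationLogEntry:
        use_start: datetime            # start date and time of each use
        use_end: datetime              # end date and time of each use
        reference_database: str        # reference database against which input data was checked
        matched_input_refs: List[str]  # input data for which the search led to a match
        verifying_persons: List[str]   # natural persons who verified the results (Art 14(5))

    # Example entry (invented values):
    entry = BiometricIdentificationLogEntry(
        use_start=datetime(2025, 1, 10, 9, 0),
        use_end=datetime(2025, 1, 10, 9, 45),
        reference_database="example-reference-database",
        matched_input_refs=["input-frame-0042"],
        verifying_persons=["operator-A", "operator-B"],
    )
    print(entry)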

Transparency and provision of information to deployers (Art 13)

High-risk AI systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately (Art 13(1) part of).

High-risk AI systems must be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers (Art 13(2)).

The instructions for use must contain at least the following information:

  1. the identity and the contact details of the provider and, where applicable, of its authorised representative;

  2. the characteristics, capabilities and limitations of performance of the high-risk AI system, including:

    (i) its intended purpose;


    (ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Art 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;


    (iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Art 9(2);


    (iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output;


    (v) when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;


    (vi) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;


    (vii) where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately;

  3. the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;

  4. the human oversight measures referred to in Art 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;

  5. the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;

  6. where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Art 12 (Art 13(3)).

Human oversight (Art 14)

High-risk AI systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use (Art 14(1)).

Human oversight must aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge (Art 14(2) part of).

The oversight measures must be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and must be ensured through either one or both of the following types of measures: (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; (b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer (Art 14(3)).

For the purpose of implementing paragraphs 14(1), 14(2) (not reproduced here) and 14(3), the high-risk AI system must be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:

  1. to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;

  2. to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

  3. to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;

  4. to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;

  5. to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.

For high-risk AI systems referred to in point 1(a) of Annex III (remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be) the measures referred to in Art 14(3) must be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.

The requirement for a separate verification by at least two natural persons must not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate (Art 14(5)).

Accuracy, robustness and cybersecurity (Art 15)

High-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle (Art 15(1)).

To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in Art 15(1) and any other relevant performance metrics, the Commission must, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies (Art 15(2)).

The levels of accuracy and the relevant accuracy metrics of high-risk AI systems must be declared in the accompanying instructions for use (Art 15(3)).

High-risk AI systems must be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.

High-risk AI systems that continue to learn after being placed on the market or put into service must be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures (Art 15(4)).

High-risk AI systems must be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions aiming to ensure the cybersecurity of high-risk AI systems must be appropriate to the relevant circumstances and the risks. The technical solutions to address AI specific vulnerabilities must include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws (Art 15(5)).

Obligations of providers of high-risk AI systems (Art 16)

The substance of providers' obligations is found in Art 16. Amongst the most important are the following:

Providers of high-risk AI systems must:

  1. ensure that their high-risk AI systems are compliant with the requirements set out in Section 2 (Arts 8 to 15);

  2. indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, and the address at which they can be contacted;

  3. have a quality management system in place which complies with Art 17;

  4. keep the documentation referred to in Art 18;

  5. when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Art 19;

  6. ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Art 43, prior to its being placed on the market or put into service;

  7. draw up an EU declaration of conformity in accordance with Art 47;

  8. affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this EU AI Act, in accordance with Art 48;

  9. comply with the registration obligations referred to in Art 49(1);

  10. take the necessary corrective actions and provide information as required in Art 20;

  11. upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2 (Arts 8 to 15);

  12. ensure that the high-risk AI system complies with accessibility requirements in accordance with Directive (EU) 2016/2102 (on the accessibility of the websites and mobile applications of public sector bodies) and Directive (EU) 2019/882 (on the accessibility requirements for products and services) (Art 16).

Quality management system (Art 17)

Providers of high-risk AI systems must put a quality management system in place that ensures compliance with this EU AI Act. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and must include at least the following aspects:

  1. a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;

  2. techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;

  3. techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;

  4. examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;

  5. technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full or do not cover all of the relevant requirements set out in Section 2 (Arts 8 to 15), the means to be used to ensure that the high-risk AI system complies with those requirements;

  6. systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems;

  7. the risk management system referred to in Art 9;

  8. the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Art 72;

  9. procedures related to the reporting of a serious incident in accordance with Art 73;

  10. the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;

  11. systems and procedures for record-keeping of all relevant documentation and information;

  12. resource management, including security-of-supply related measures;

  13. an accountability framework setting out the responsibilities of the management and other staff with regard to all the aspects listed in this paragraph (Art 17(1)).

The implementation of the aspects referred to in Art 17(1) must be proportionate to the size of the provider’s organisation.

Providers must, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this EU AI Act (Art 17(2)).

Providers of high-risk AI systems that are subject to obligations regarding quality management systems or an equivalent function under relevant sectoral Union law may include the aspects listed in Art 17(1) as part of the quality management systems pursuant to that law (Art 17(3)).

For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the obligation to put in place a quality management system, with the exception of points (g), (h) and (i) of Art 17(1) (items 7 to 9 above), will be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services law. To that end, any harmonised standards referred to in Article 40 shall be taken into account (Art 17(4)).

Documentation keeping (Art 18)

The provider must, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:

  1. the technical documentation referred to in Art 11;

  2. the documentation concerning the quality management system referred to in Art 17;

  3. the documentation concerning the changes approved by notified bodies, where applicable;

  4. the decisions and other documents issued by the notified bodies, where applicable;

  5. the EU declaration of conformity referred to in Art 47 (Art 18(1)).

Each Member State will determine the conditions under which the documentation referred to in Art 18(1) remains at the disposal of the national competent authorities for the period indicated in Art 18(1), for cases where a provider or its authorised representative established on its territory goes bankrupt or ceases its activity prior to the end of that period (Art 18(2)).

Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law must maintain the technical documentation as part of the documentation kept under the relevant Union financial services law (Art 18(3)).

Automatically generated logs (Art 19)

Providers of high-risk AI systems shall keep the logs referred to in Art 12(1) (Record-keeping) automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs must be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data (Art 19(1)).

Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation kept under the relevant financial services law (Art 19(2)).

Corrective actions and duty of information (Art 20)

Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this EU AI Act must immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They must inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly (Art 20(1)).

Where the high-risk AI system presents a risk within the meaning of Art 79(1) (AI systems presenting a risk shall be understood as a ‘product presenting a risk’) and the provider becomes aware of that risk, it must immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Art 44 (certificates), in particular, of the nature of the non-compliance and of any relevant corrective action taken (Art 20(2)).

Cooperation with competent authorities (Art 21)

Providers of high-risk AI systems must, upon a reasoned request by a competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2 (Arts 8 to 15), in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned (Art 21(1)).

Upon a reasoned request by a competent authority, providers shall also give the requesting competent authority, as applicable, access to the automatically generated logs of the high-risk AI system referred to in Art 12(1), to the extent such logs are under their control (Art 21(2)).

Any information obtained by a competent authority pursuant to this Art must be treated in accordance with the confidentiality obligations set out in Art 78 (Art 21(3)).

Authorised representatives of providers of high-risk AI systems (Art 22)

Prior to making their high-risk AI systems available on the Union market, providers established in third countries must, by written mandate, appoint an authorised representative which is established in the Union. The provider must enable its authorised representative to perform the tasks specified in the mandate received from the provider. The remainder of the Art (not reproduced here) includes the obligations of the authorised representative in the case of high risk systems.

Obligations of importers (Art 23)

Before placing a high-risk AI system on the market, importers must ensure that the system is in conformity with the EU AI Act by verifying that:

  1. the relevant conformity assessment procedure referred to in Art 43 has been carried out by the provider of the high-risk AI system;

  2. the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;

  3. the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Art 47 and instructions for use;

  4. the provider has appointed an authorised representative in accordance with Art 22(1) (Art 23(1)). A minimal checklist sketch follows this list.
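
Purely as an illustrative aid, the four verifications in Art 23(1) can be recorded as a simple internal checklist before a system is placed on the market. The class and field names below are hypothetical and are not drawn from the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class ImporterPreMarketCheck:
    """Hypothetical record of the Art 23(1) verifications for one high-risk AI system."""
    conformity_assessment_done: bool           # Art 43 procedure carried out by the provider
    technical_documentation_drawn_up: bool     # Art 11 and Annex IV
    ce_marking_declaration_instructions: bool  # CE marking, Art 47 declaration and instructions for use
    authorised_representative_appointed: bool  # Art 22(1)

    def may_place_on_market(self) -> bool:
        # Art 23(1): all four verifications must be satisfied first.
        return all(getattr(self, f.name) for f in fields(self))

check = ImporterPreMarketCheck(True, True, True, False)
print(check.may_place_on_market())  # False: no authorised representative appointed yet
```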

Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this EU AI Act, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Art 79(1) (AI systems presenting a risk shall be understood as a ‘product presenting a risk’), the importer must inform the provider of the system, the authorised representative and the market surveillance authorities to that effect (Art 23(2)).

Importers must indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable (Art 23(3)).

Importers must ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2 (Arts 8 to 15) (Art 23(4)).

Importers must keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU declaration of conformity referred to in Art 47 (Art 23(5)).

Importers must provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in Art 23(5), to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 (Arts 8 to 15) in a language which can be easily understood by them. For this purpose, they must also ensure that the technical documentation can be made available to those authorities (Art 23(6)).

Importers must cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the risks posed by it (Art 23(7)).

Obligations of distributors (Art 24)

Before making a high-risk AI system available on the market, distributors must verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Art 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Art 16, points (b) and (c) and Art 23(3) (Art 24(1)).

Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2 (Arts 8 to 15), it must not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Art 79(1) (AI systems presenting a risk shall be understood as a ‘product presenting a risk’), the distributor must inform the provider or the importer of the system, as applicable, to that effect (Art 24(2)).

Distributors must ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2 (Art 24(3)).

Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Section 2 (Arts 8 to 15), it must take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or must ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Art 79(1) (AI systems presenting a risk shall be understood as a ‘product presenting a risk’), the distributor must immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken (Art 24(4)).

Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system must provide that authority with all the information and documentation regarding their actions pursuant to Art 24(1) to Art 24(4) necessary to demonstrate the conformity of that system with the requirements set out in Section 2 (Arts 8 to 15) (Art 24(5)).

Distributors must cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system made available on the market by the distributors, in particular to reduce or mitigate the risk posed by it (Art 24(6)).

Responsibilities along the AI value chain (Art 25)

Any distributor, importer, deployer or other third party will be considered to be a provider of a high-risk AI system and will be subject to the obligations of the provider under Art 16, in any of the following circumstances (a brief decision sketch follows this list):

  1. they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are otherwise allocated;

  2. they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to Art 6;

  3. they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Art 6 (Art 25(1)).
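
As a hedged illustration of the three triggers listed above, the sketch below treats Art 25(1) as a simple decision: any one trigger is enough for a downstream operator to take on the provider obligations of Art 16. The parameter names are mine, and in practice each trigger (for example, whether a modification is 'substantial') is a matter of legal assessment rather than a boolean flag.

```python
def becomes_provider(puts_own_name_or_trademark: bool,
                     makes_substantial_modification: bool,
                     changes_purpose_to_high_risk: bool) -> bool:
    """Sketch of Art 25(1): a distributor, importer, deployer or other third
    party is treated as the provider if any one of the triggers applies."""
    return (puts_own_name_or_trademark
            or makes_substantial_modification
            or changes_purpose_to_high_risk)

# Example: rebranding an already-marketed high-risk system is sufficient on its own.
print(becomes_provider(True, False, False))  # True
```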

Where the circumstances referred to in (Art 25(1)) occur, the provider that initially placed the AI system on the market or put it into service will no longer be considered to be a provider of that specific AI system for the purposes of this EU AI Act. That initial provider must closely cooperate with new providers and must make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this EU AI Act, in particular regarding the compliance with the conformity assessment of high-risk AI systems.

This paragraph will not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation (Art 25(2)).

In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation legislation listed in Section A of Annex I, the product manufacturer will be considered to be the provider of the high-risk AI system, and shall be subject to the obligations under Art 16 under either of the following circumstances:

  1. the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer;

  2. the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market (Art 25(3)).

The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system must, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this EU AI Act. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence.

The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format (Art 25(4)).

Art 25(2) and (3) are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law (Art 25(5)).

Obligations of deployers of high-risk AI systems (Art 26)

Deployers of high-risk AI systems must take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to Art 26(3) and Art 26(6) (Art 26(1)).

Deployers must assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support (Art 26(2)).

The obligations set out in Art 26(1) and Art 26(2) are without prejudice to other deployer obligations under Union or national law and to the deployer’s freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider (Art 26(3)).

Without prejudice to Art 26(1) and (2), to the extent the deployer exercises control over the input data, that deployer must ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Art 26(4)).

Deployers must monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Art 72 (Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems). Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Art 79(1) (AI systems presenting a risk shall be understood as a ‘product presenting a risk’) they must, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and must suspend the use of that system. Where deployers have identified a serious incident, they must also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Art 73 (Reporting of serious incidents) will apply mutatis mutandis. This obligation will not cover sensitive operational data of deployers of AI systems which are law enforcement authorities (Art 26(5)).

For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law.

Deployers of high-risk AI systems must keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data.

Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law must maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law (Art 26(6)).

Before putting into service or using a high-risk AI system at the workplace, deployers who are employers must inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information must be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives (Art 26(7)).

Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies must comply with the registration obligations referred to in Art 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Art 71, they must not use that system and must inform the provider or the distributor (Art 26(8)).

Where applicable, deployers of high-risk AI systems must use the information provided under Art 13 of this EU AI Act to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 (GDPR) or Article 27 of Directive (EU) 2016/680 (on the protection of natural persons with regard to the processing of personal data for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data) (Art 26(9)).

Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification must request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation must be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested must be deleted.

In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It must be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems.

This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 (GDPR) and Art 10 of Directive (EU) 2016/680 for the processing of biometric data. Regardless of the purpose or deployer, each use of such high-risk AI systems must be documented in the relevant police file and must be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement.

This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 (on the protection of natural persons with regard to the processing of personal data for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data) on supervisory authorities. Deployers must submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment. Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric identification systems (Art 26(10)).

Without prejudice to Art 50 of this EU AI Act, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons must inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes, Article 13 of Directive (EU) 2016/680 shall apply (Art 26(11)).

Deployers must cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this EU AI Act (Art 26(12)).

Fundamental rights impact assessment for high-risk AI systems (Art 27)

Prior to deploying a high-risk AI system referred to in Art 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III (Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity), deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III ((b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; (c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance), must perform an assessment of the impact on fundamental rights that the use of such system may produce.

For that purpose, deployers must perform an assessment consisting of the following elements (a minimal record sketch follows this list):

  1. a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

  2. a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;

  3. the categories of natural persons and groups likely to be affected by its use in the specific context;

  4. the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Art 13;

  5. a description of the implementation of human oversight measures, according to the instructions for use;

  6. the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms (Art 27(1)).
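
Again purely as an illustrative aid, the six elements of the Art 27(1) assessment can be captured in a simple record such as the sketch below. The class, field names and example values are hypothetical; the assessment itself is a substantive legal and operational exercise, not a data structure.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Hypothetical record mirroring the six elements of Art 27(1)."""
    deployer_processes: str                 # 1. processes in which the system is used, per its intended purpose
    period_and_frequency_of_use: str        # 2. period and frequency of intended use
    affected_persons_and_groups: list[str]  # 3. categories of natural persons and groups likely to be affected
    specific_risks_of_harm: list[str]       # 4. risks of harm to those categories, using the Art 13 information
    human_oversight_measures: str           # 5. implementation of human oversight per the instructions for use
    measures_if_risks_materialise: str      # 6. internal governance and complaint arrangements
    notified_to_market_surveillance: bool = False  # Art 27(3): notify the authority of the results

fria = FundamentalRightsImpactAssessment(
    deployer_processes="creditworthiness evaluation of loan applicants",
    period_and_frequency_of_use="continuous use, applicants re-scored monthly",
    affected_persons_and_groups=["loan applicants"],
    specific_risks_of_harm=["discriminatory refusal of credit"],
    human_oversight_measures="credit officer reviews every adverse score",
    measures_if_risks_materialise="internal escalation and complaints procedure",
)
print(fria.notified_to_market_surveillance)  # False until the Art 27(3) notification is made
```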

The obligation laid down in Art 27(1) applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in Art 27(1) has changed or is no longer up to date, the deployer shall take the necessary steps to update the information (Art 27(2)).

Once the assessment referred to in Art 27(1) has been performed, the deployer must notify the market surveillance authority of its results, submitting the filled-out template referred to in Art 27(5) as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify (Art 27(3)).

Registration (Art 49)

Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III (Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity), the provider or, where applicable, the authorised representative must register themselves and their system in the EU database referred to in Art 71 (Art 49(1)).

Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Art 6(3), that provider or, where applicable, the authorised representative must register themselves and that system in the EU database referred to in Art 71 (Art 49(2)).

Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III (Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity) deployers that are public authorities, Union institutions, bodies, offices or agencies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Art 71 (Art 49(3)).

For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Art must be in a secure non-public section of the EU database referred to in Art 71 and shall include only the following information, as applicable, referred to in:

  1. Section A, points 1 to 10, of Annex VIII, with the exception of points 6, 8 and 9;

  2. Section B, points 1 to 5, and points 8 and 9 of Annex VIII;

  3. Section C, points 1 to 3, of Annex VIII;

  4. points 1, 2, 3 and 5, of Annex IX.

Only the Commission and national authorities referred to in Article 74(8) will have access to the respective restricted sections of the EU database listed in the first subparagraph of this paragraph (Art 49(4)).

High-risk AI systems referred to in point 2 of Annex III (Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity), must be registered at national level (Art 49(5)).

Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems (Art 72)

Providers must establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system (Art 72(1)).

The post-market monitoring system must actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2 (Arts 8 to 15). Where relevant, post-market monitoring will include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities (Art 72(2)).

The post-market monitoring system must be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission must adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 18 months after the entry into force of this EU AI Act. That implementing act shall be adopted in accordance with the examination procedure referred to in Art 98(2) (Art 98 Committee procedure) (Art 72(3)).

For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, where a post-market monitoring system and plan are already established under that legislation, in order to ensure consistency, avoid duplications and minimise additional burdens, providers will have a choice of integrating, as appropriate, the necessary elements described in Art 72(1), (2) and (3), using the template referred to in Art 72(3), into systems and plans already existing under that legislation, provided that it achieves an equivalent level of protection. The first subparagraph of this paragraph (what this means requires clarification) will also apply to high-risk AI systems referred to in point 5 of Annex III (Access to and enjoyment of essential private services and essential public services and benefits) placed on the market or put into service by financial institutions that are subject to requirements under Union financial services law regarding their internal governance, arrangements or processes (Art 72(4)).

Reporting of serious incidents (Art 73)

Providers of high-risk AI systems placed on the Union market must report any serious incident to the market surveillance authorities of the Member States where that incident occurred (Art 73(1)).

The Art 73(1) report must be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident. The period for the reporting referred to in the first subparagraph shall take account of the severity of the serious incident (Art 73(2)).

Notwithstanding Art 73(2), in the event of a widespread infringement or a serious incident as defined in Art 3(49)(b), the report referred to in Art 73(1) must be provided immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of that incident (Art 73(3)).
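
Treating only the two deadlines reproduced above as simple date arithmetic, a minimal sketch might look as follows; the labels are mine, and the further special cases in Art 73 (which are not reproduced here) are deliberately not modelled.

```python
from datetime import datetime, timedelta

def latest_report_date(became_aware: datetime, widespread_or_art_3_49_b: bool) -> datetime:
    """Latest reporting date for a serious incident under the extracts above.

    Art 73(2): not later than 15 days after the provider (or, where applicable,
    the deployer) becomes aware of the serious incident.
    Art 73(3): not later than two days for a widespread infringement or a
    serious incident as defined in Art 3(49)(b).
    """
    days = 2 if widespread_or_art_3_49_b else 15
    return became_aware + timedelta(days=days)

print(latest_report_date(datetime(2025, 6, 2, 9, 0), widespread_or_art_3_49_b=False))
# 2025-06-17 09:00:00
```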

EU database for high-risk AI systems listed in Annex III (Art 71)

The Commission must, in collaboration with the Member States, set up and maintain an EU database containing information referred to in Art 71(2) and Art 71(3) concerning high-risk AI systems referred to in Art 6(2) which are registered in accordance with Art 49 and Art 60, and AI systems that are not considered as high-risk under Art 6(3) and which are registered in accordance with Art 6(4) and Art 49. When setting the functional specifications of such database, the Commission shall consult the relevant experts, and when updating the functional specifications of such database, the Commission shall consult the Board (Art 71(1)).

The data listed in Sections A and B of Annex VIII must be entered into the EU database by the provider or, where applicable, by the authorised representative (Art 71(2)).

The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or who acts on behalf of, a public authority, agency or body, in accordance with Arts 49(3) and (4) (Art 71(3)).

With the exception of the section referred to in Art 49(4) and Art 60(4)(c), the information contained in the EU database registered in accordance with Art 49 must be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Art 60 (Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes) must be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public (Art 71(4)).

The EU database must contain personal data only in so far as necessary for collecting and processing information in accordance with this EU AI Act. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable (Art 71(5)).

The Commission will be the controller of the EU database. It will make available to providers, prospective providers and deployers adequate technical and administrative support. The EU database shall comply with the applicable accessibility requirements (Art 71(6)).

Procedure at national level for dealing with AI systems presenting a risk (Art 79)

AI systems presenting a risk will be understood as a ‘product presenting a risk’ as defined in Art 3, point 19 of Regulation (EU) 2019/1020 (Market Surveillance Regulation), in so far as they present risks to the health or safety, or to fundamental rights, of persons (Art 79(1)).

Where the market surveillance authority of a Member State has sufficient reason to consider an AI system to present a risk as referred to in Art 79(1), it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this EU AI Act. Particular attention shall be given to AI systems presenting a risk to vulnerable groups. Where risks to fundamental rights are identified, the market surveillance authority shall also inform and fully cooperate with the relevant national public authorities or bodies referred to in Art 77(1). The relevant operators shall cooperate as necessary with the market surveillance authority and with the other national public authorities or bodies referred to in Art 77(1). Where, in the course of that evaluation, the market surveillance authority or, where applicable, the market surveillance authority in cooperation with the national public authority referred to in Art 77(1), finds that the AI system does not comply with the requirements and obligations laid down in this EU AI Act, it must without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation. The market surveillance authority shall inform the relevant notified body accordingly. Art 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph of this paragraph (Art 79(2)).

Where the market surveillance authority considers that the non-compliance is not restricted to its national territory, it must inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take (Art 79(3)).

The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market (Art 79(4)).

Where the operator of an AI system does not take adequate corrective action within the period referred to in Art 79(2), the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system being made available on its national market or put into service, to withdraw the product or the standalone AI system from that market, or to recall it. That authority shall without undue delay notify the Commission and the other Member States of those measures (Art 79(5)).

The notification referred to in Art 79(5) shall include all available details, in particular the information necessary for the identification of the non-compliant AI system, the origin of the AI system and the supply chain, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following:

  1. non-compliance with the prohibition of the AI practices referred to in Art 5;

  2. a failure of a high-risk AI system to meet requirements set out in Chapter III, Section 2;

  3. shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring a presumption of conformity;

  4. non-compliance with Article 50 (Art 79(6)).

The market surveillance authorities other than the market surveillance authority of the Member State initiating the procedure shall, without undue delay, inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections (Art 79(7)).

Where, within three months of receipt of the notification referred to in Art 79(5), no objection has been raised by either a market surveillance authority of a Member State or by the Commission in respect of a provisional measure taken by a market surveillance authority of another Member State, that measure shall be deemed justified. This shall be without prejudice to the procedural rights of the concerned operator in accordance with Art 18 of Regulation (EU) 2019/1020 (Market Surveillance Regulation). The three-month period referred to in this paragraph shall be reduced to 30 days in the event of non-compliance with the prohibition of the AI practices referred to in Art 5 of this EU AI Act (Art 79(8)).

The market surveillance authorities shall ensure that appropriate restrictive measures are taken in respect of the product or the AI system concerned, such as withdrawal of the product or the AI system from their market, without undue delay (Art 79(9)).

Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III (Art 80)

Where a market surveillance authority has sufficient reason to consider that an AI system classified by the provider as non-high-risk pursuant to Art 6(3) is indeed high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Art 6(3) and the Commission guidelines (Art 80(1)).

Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this EU AI Act, as well as take appropriate corrective action within a period the market surveillance authority may prescribe (Art 80(2)).

Where the market surveillance authority considers that the use of the AI system concerned is not restricted to its national territory, it must inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the provider to take (Art 80(3)).

The provider must ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in Art 80(2), the provider shall be subject to fines in accordance with Art 99 (Art 80(4)).

The provider must ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market (Art 80(5)).

Where the provider of the AI system concerned does not take adequate corrective action within the period referred to in Art 80(2), Art 79(5) to (9) shall apply (Art 80(6)).

Where, in the course of the evaluation pursuant to Art 80(1), the market surveillance authority establishes that the AI system was misclassified by the provider as non-high-risk in order to circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Art 99 (Art 80(7)).

In exercising their power to monitor the application of this Article, and in accordance with Art 11 of Regulation (EU) 2019/1020 (Market Surveillance Regulation), market surveillance authorities may perform appropriate checks, taking into account in particular information stored in the EU database referred to in Art 71 of this Regulation (Art 80(8)).

Union safeguard procedure (Art 81)

Where, within three months of receipt of the notification referred to in Art 79(5), or within 30 days in the case of non-compliance with the prohibition of the AI practices referred to in Art 5, objections are raised by the market surveillance authority of a Member State to a measure taken by another market surveillance authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall, within six months, or within 60 days in the case of non-compliance with the prohibition of the AI practices referred to in Art 5, starting from the notification referred to in Article 79(5), decide whether the national measure is justified and shall notify its decision to the market surveillance authority of the Member State concerned. The Commission shall also inform all other market surveillance authorities of its decision (Art 81(1)).

Where the Commission considers the measure taken by the relevant Member State to be justified, all Member States shall ensure that they take appropriate restrictive measures in respect of the AI system concerned, such as requiring the withdrawal of the AI system from their market without undue delay, and shall inform the Commission accordingly. Where the Commission considers the national measure to be unjustified, the Member State concerned shall withdraw the measure and shall inform the Commission accordingly (Art 81(2)).

Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Arts 40 and 41 of this EU AI Act, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012 (Art 81(3)).

Compliant AI systems which present a risk (Art 82)

Where, having performed an evaluation under Art 79, after consulting the relevant national public authority referred to in Art 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system complies with the EU AI Act, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it must require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe (Art 82(1)).

The provider or other relevant operator must ensure that corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market within the timeline prescribed by the market surveillance authority of the Member State referred to in Art 82(1) (Art 82(2)).

The Member States shall immediately inform the Commission and the other Member States of a finding under Art 82(1). That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken (Art 82(3)).

The Commission shall without undue delay enter into consultation with the Member States concerned and the relevant operators, and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified and, where necessary, propose other appropriate measures (Art 82(4)).

 The Commission must immediately communicate its decision to the Member States concerned and to the relevant operators. It shall also inform the other Member States (Art 82(5)).

Formal non-compliance (Art 83)

Where the market surveillance authority of a Member State makes one of the following findings, it must require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe:

  1. the CE marking has been affixed in violation of Art 48;
  2. the CE marking has not been affixed;
  3. the EU declaration of conformity referred to in Art 47 has not been drawn up;
  4. the EU declaration of conformity referred to in Article 47 has not been drawn up correctly;
  5. the registration in the EU database referred to in Art 71 has not been carried out;
  6. where applicable, no authorised representative has been appointed;
  7. technical documentation is not available (Art 83(1)).

Where the non-compliance referred to in Art 83(1) persists, the market surveillance authority of the Member State concerned shall take appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or to ensure that it is recalled or withdrawn from the market without delay (Art 83(2)).

Union AI testing support structures (Art 84)

The Commission must designate one or more Union AI testing support structures to perform the tasks listed under Article 21(6) of Regulation (EU) 2019/1020 (Market Surveillance Regulation) in the area of AI (Art 84(1)).

Without prejudice to the tasks referred to in Art 84(1), Union AI testing support structures shall also provide independent technical or scientific advice at the request of the Board, the Commission, or of market surveillance authorities (Art 84(2)).

Right to lodge a complaint with a market surveillance authority (Art 85)

Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority. In accordance with Regulation (EU) 2019/1020 (Market Surveillance Regulation), such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities.

Right to explanation of individual decision-making (Art 86)

Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken (Art 86(1)).

Art 86(1) shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under that paragraph follow from Union or national law in compliance with Union law (Art 86(2)).

This Article shall apply only to the extent that the right referred to in Art 86(1) is not otherwise provided for under Union law (Art 86(3)).

Reporting of infringements and protection of reporting persons (Art 87)

Directive (EU) 2019/1937 shall apply to the reporting of infringements of this EU AI Act and the protection of persons reporting such infringements.

Enforcement of the obligations of providers of general-purpose AI models (Art 88)

The Commission will have exclusive powers to supervise and enforce Chapter V (General-purpose AI models), taking into account the procedural guarantees under Art 94 (Procedural rights of economic operators of the general-purpose AI model). The Commission must entrust the implementation of these tasks to the AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties.

Without prejudice to Art 75(3), market surveillance authorities may request the Commission to exercise the powers laid down in this Section (Section 5 on Supervision, Investigation, Enforcement and Monitoring in respect of providers of general purpose AI models) where that is necessary and proportionate to assist with the fulfilment of their tasks under the EU AI Act.

Monitoring actions (Art 89)

For the purpose of carrying out the tasks assigned to it under this Section (Section 5 on Supervision, Investigation, Enforcement and Monitoring in respect of providers of general purpose AI models), the AI Office may take the necessary actions to monitor the effective implementation and compliance with this EU AI Act by providers of general-purpose AI models, including their adherence to approved codes of practice (Art 89(1)).

Downstream providers must have the right to lodge a complaint alleging an infringement of this EU AI Act. A complaint must be duly reasoned and indicate at least:

  1. the point of contact of the provider of the general-purpose AI model concerned;

  2. a description of the relevant facts, the provisions of this EU AI Act concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this EU AI Act;

  3. any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative (Art 89(2)).

Alerts of systemic risks by the scientific panel (Art 90)

The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that: (a) a general-purpose AI model poses a concrete identifiable risk at Union level; or, (b) a general-purpose AI model meets the conditions referred to in Article 51 (Art 90(1)).

Upon such qualified alert, the Commission, through the AI Office and after having informed the Board, may exercise the powers laid down in this Section for the purpose of assessing the matter. The AI Office shall inform the Board of any measure according to Articles 91 to 94 (Art 90(2)).

A qualified alert shall be duly reasoned and indicate at least: (a) the point of contact of the provider of the general-purpose AI model with systemic risk concerned; (b) a description of the relevant facts and the reasons for the alert by the scientific panel; (c) any other information that the scientific panel considers to be relevant, including, where appropriate, information gathered on its own initiative (Art 90(3)).

Power to request documentation and information (Art 91)

The Commission may request the provider of the general-purpose AI model concerned to provide the documentation drawn up by the provider in accordance with Art 53 and Art 55, or any additional information that is necessary for the purpose of assessing compliance of the provider with this EU AI Act (Art 91(1)).

Before sending the request for information, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model (Art 91(2)).

Upon a duly substantiated request from the scientific panel, the Commission may issue a request for information to a provider of a general-purpose AI model, where the access to information is necessary and proportionate for the fulfilment of the tasks of the scientific panel under Art 68(2) (Art 91(3)).

The request for information must state the legal basis and the purpose of the request, specify what information is required, set a period within which the information is to be provided, and indicate the fines provided for in Article 101 for supplying incorrect, incomplete or misleading information (Art 91(4)).

The provider of the general-purpose AI model concerned, or its representative must supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading (Art 91(5)).

Power to conduct evaluations (Art 92)

The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model concerned:

  1. to assess compliance of the provider with obligations under the EU AI Act, where the information gathered pursuant to Art 91 is insufficient; or,

  2. to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following a qualified alert from the scientific panel in accordance with Article 90(1)(a) (Art 92(1)).

The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the scientific panel established pursuant to Article 68. Independent experts appointed for this task must meet the criteria outlined in Article 68(2) (Art 92(2)).

For the purposes of Art 92(1), the Commission may request access to the general-purpose AI model concerned through APIs or further appropriate technical means and tools, including source code (Art 92(3)).

The request for access must state the legal basis, the purpose and reasons of the request and set the period within which the access is to be provided, and the fines provided for in Article 101 for failure to provide access (Art 92(4)).

The provider of the general-purpose AI model concerned or its representative must supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, must provide the access requested on behalf of the provider of the general-purpose AI model concerned (Art 92(5)).

The Commission must adopt implementing acts setting out the detailed arrangements and the conditions for the evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection thereof. Those implementing acts must be adopted in accordance with the examination procedure referred to in Art 98(2) (Art 92(6)).

Prior to requesting access to the general-purpose AI model concerned, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model to gather more information on the internal testing of the model, internal safeguards for preventing systemic risks, and other internal procedures and measures the provider has taken to mitigate such risks (Art 92(7)).  

Power to request measures (Art 93)

Where necessary and appropriate, the Commission may request providers to:

  1. take appropriate measures to comply with the obligations set out in Arts 53 and 54;

  2. implement mitigation measures, where the evaluation carried out in accordance with Art 92 has given rise to serious and substantiated concern of a systemic risk at Union level;

  3. restrict the making available on the market, withdraw or recall the model (Art 93(1)).

Before a measure is requested, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model (Art 93(2)).

If, during the structured dialogue referred to in Art 93(2), the provider of the general-purpose AI model with systemic risk offers commitments to implement mitigation measures to address a systemic risk at Union level, the Commission may, by decision, make those commitments binding and declare that there are no further grounds for action (Art 93(3)).

Guidelines from the Commission on the implementation of this EU AI Act (Art 96)

The Commission must develop guidelines on the practical implementation of this EU AI Act, and in particular on:

  1. the application of the requirements and obligations referred to in Arts 8 to 15 and in Art 25;

  2. the prohibited practices referred to in Article 5;

  3. the practical implementation of the provisions related to substantial modification;

  4. the practical implementation of transparency obligations laid down in Art 50 (Art 96(1)).

Committee procedure (Art 98)

The Commission will be assisted by a committee. That committee shall be a committee within the meaning of Regulation (EU) No 182/2011.

Where reference is made to this paragraph, Art 5 of Regulation (EU) No 182/2011 shall apply.

AI Office means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in this EU AI Act to the AI Office shall be construed as references to the Commission.

Article 99 (Penalties)

Member States must lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements by operators, taking into account the guidelines issued by the Commission under Art 96. The penalties provided for must be effective, proportionate and dissuasive. They must take into account the interests of SMEs, including start-ups, and their economic viability (Art 99(1) extracts).

Member States must, by the date of entry into application, notify the Commission of the rules on penalties and of other enforcement measures referred to in Art 99(1), and must notify it, without delay, of any subsequent amendment to them (Art 99(2) extracts).

Non-compliance with Art 5 must be subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher (Art 99(3) extracts).

Non-compliance with any of the following provisions related to operators or notified bodies, other than those laid down in Art 5, must be subject to administrative fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher:

  1. obligations of providers under Art 16;

  2. obligations of authorised representatives under Art 22;

  3. obligations of importers under Art 23;

  4. obligations of distributors under Art 24;

  5. obligations of deployers pursuant to Art 26;

  6. requirements and obligations of notified bodies pursuant to Art 31, Art 33(1), (3) and (4) or Art 34;

  7. transparency obligations for providers and deployers pursuant to Art 50 (Art 99(4) extracts).

The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request must be subject to administrative fines of up to EUR 7,500,000 or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher (Art 99(5) extracts).

In the case of SMEs, including start-ups, each fine referred to in this Article must be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever thereof is lower (Art 99(6)).
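The interaction of the fixed ceilings, the turnover percentages and the SME rule can be illustrated with a short indicative sketch in Python (not legal advice; the tier figures and the "higher of / lower of" logic come from Art 99(3) to (6) as summarised above, while the function name and the example turnover figures are assumptions for illustration only):

```python
# Indicative sketch of the Art 99 administrative fine ceilings (not legal advice).
# Tiers as summarised above: Art 99(3) prohibited practices (Art 5); Art 99(4)
# obligations under Arts 16, 22 to 24, 26, 31, 33, 34 and 50; Art 99(5) supplying
# incorrect, incomplete or misleading information. For undertakings the ceiling is
# the HIGHER of the fixed amount and the turnover percentage; for SMEs, including
# start-ups, it is the LOWER of the two (Art 99(6)).

TIERS = {
    "art_99_3": (35_000_000, 0.07),
    "art_99_4": (15_000_000, 0.03),
    "art_99_5": (7_500_000, 0.01),
}


def fine_ceiling(tier: str, worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Return the maximum administrative fine in EUR for the given tier (indicative only)."""
    fixed_amount, pct = TIERS[tier]
    turnover_based = pct * worldwide_turnover_eur
    return min(fixed_amount, turnover_based) if is_sme else max(fixed_amount, turnover_based)


# Example: an undertaking with EUR 2bn turnover infringing Art 5 faces a ceiling of
# max(EUR 35m, 7% x EUR 2bn = EUR 140m) = EUR 140m; an SME with EUR 10m turnover
# faces min(EUR 35m, EUR 700,000) = EUR 700,000.
print(fine_ceiling("art_99_3", 2_000_000_000, is_sme=False))  # 140000000.0
print(fine_ceiling("art_99_3", 10_000_000, is_sme=True))      # 700000.0
```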

When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation must be taken into account and, as appropriate, regard must be given to the matters set out in Art 99(7)(a) to (j) (Art 99(7)).

Each Member State must lay down rules on to what extent administrative fines may be imposed on public authorities and bodies established in that Member State (Art 99(8)).

Depending on the legal system of the Member States, the rules on administrative fines may be applied in such a manner that the fines are imposed by competent national courts or by other bodies, as applicable in those Member States. The application of such rules in those Member States must have an equivalent effect (Art 99(9)).

The exercise of powers under this Article must be subject to appropriate procedural safeguards in accordance with Union and national law, including effective judicial remedies and due process (Art 99(10)).

Member States must, on an annual basis, report to the Commission about the administrative fines they have issued during that year, in accordance with this Article, and about any related litigation or judicial proceedings (Art 99(11)).

Fines for providers of general-purpose AI models (Art 101)

Non-compliance by providers of general-purpose AI models risks significant fines. As under the EU GDPR, fines are capped at the higher of a percentage of total worldwide annual turnover in the preceding financial year or a fixed amount (Art 101(1)).

The Commission may impose on providers of general-purpose AI models fines not exceeding 3% of their annual total worldwide turnover in the preceding financial year or EUR 15,000,000, whichever is higher, when the Commission finds that the provider intentionally or negligently:

  1. infringed the relevant provisions of the EU AI Act;

  2. failed to comply with a request for a document or for information pursuant to Art 91, or supplied incorrect, incomplete or misleading information;

  3. failed to comply with a measure requested under Art 93;

  4. failed to make available to the Commission access to the general-purpose AI model or general-purpose AI model with systemic risk with a view to conducting an evaluation pursuant to Art 92 (Art 101(1) extracts).

In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also take into account commitments made in accordance with Art 93(3) or made in relevant codes of practice in accordance with Art 56 (Art 101(1) extracts).

Before adopting the decision pursuant to Art 101(1), the Commission shall communicate its preliminary findings to the provider of the general-purpose AI model and give it an opportunity to be heard (Art 101(2)).

Fines imposed in accordance with this Article must be effective, proportionate and dissuasive (Art 101(3)).

Information on fines imposed under this Article must also be communicated to the Board as appropriate (Art 101(4)).

The Court of Justice of the European Union shall have unlimited jurisdiction to review decisions of the Commission fixing a fine under this Article. It may cancel, reduce or increase the fine imposed (Art 101(5)).

Application of the EU AI Act (Art 111)

AI systems already placed on the market or put into service and general-purpose AI models already placed on the market (Art 111).

Without prejudice to the application of Art 5 as referred to in Art 113(3)(a), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X (Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice) that have been placed on the market or put into service before 36 months from the date of entry into force of this EU AI Act must be brought into compliance with this EU AI Act by 31 December 2030 (Art 111(1)).

The requirements laid down in this EU AI Act will be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex X (Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice) to be undertaken as provided for in those legal acts and where those legal acts are replaced or amended (Art 111(2)).

Without prejudice to the application of Art 5 as referred to in Art 113(3)(a), this EU AI Act will apply to operators of high-risk AI systems, other than the systems referred to in Art 111(1), that have been placed on the market or put into service before 24 months from the date of entry into force of this EU AI Act only if, as from that date, those systems are subject to significant changes in their designs.

In any case, the providers and deployers of high-risk AI systems intended to be used by public authorities must take the necessary steps to comply with the requirements and obligations of this EU AI Act by six years from the date of entry into force of this EU AI Act (Art 111(2)).

Providers of general-purpose AI models that have been placed on the market before 12 months from the date of entry into force of this EU AI Act must take the necessary steps in order to comply with the obligations laid down in this EU AI Act by 36 months from the date of entry into force of this EU AI Act (Art 111(3)).

Entry into force and application (Art 113) 

The EU AI Act will enter into force on the twentieth day following that of its publication in the Official Journal of the European Union. It will apply from 24 months from the date of entry into force of the EU AI Act.

However:

  1. Chapters I and II shall apply from six months from the date of entry into force of this EU AI Act;

  2. Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Art 78 shall apply from 12 months from the date of entry into force of this EU AI Act, with the exception of Art 101.
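As a purely indicative illustration of this staggered timetable, the sketch below derives approximate application dates from the date of entry into force (assumed here to be 1 August 2024, the twentieth day after publication in the Official Journal). The month offsets follow the summary of Art 113 and Art 111(3) above; the Act itself fixes precise calendar dates, so the printed dates are indicative only and should always be checked against the Official Journal text:

```python
# Indicative timetable sketch (not legal advice). Entry into force is assumed to be
# 1 August 2024, the twentieth day after publication in the Official Journal; the
# month offsets follow the summary of Art 113 and Art 111(3) above. The Act itself
# states precise calendar dates, so the computed dates are approximations.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumption for illustration


def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (day of month preserved; sufficient here)."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, d.day)


MILESTONES = {
    "Chapters I and II apply": 6,
    "Chapter III Section 4, Chapter V, Chapters VII and XII and Art 78 apply (except Art 101)": 12,
    "General application of the EU AI Act": 24,
    "GPAI models already on the market must comply (Art 111(3))": 36,
}

for milestone, offset_months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset_months).isoformat()}  {milestone}")
```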

Conformity assessment: what it is and what the EU AI Act requires

Conformity assessment means the process of demonstrating whether the requirements set out in Chapter III, Section 2 of the EU AI Act relating to a high-risk AI system have been fulfilled (Art 3(20)).

Harmonised standards and standardisation deliverables (Art 40)

High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts thereof, the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 as amended by Regulation (EU) 2022/2480 (on European standardisation), shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter V, Sections 2 and 3, of this EU AI Act, to the extent that those standards cover those requirements or obligations (Art 40(1)).

In accordance with Art 10 of Regulation (EU) No 1025/2012 as amended by Regulation (EU) 2022/2480 (on European standardisation), the Commission must issue, without undue delay, standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, standardisation requests covering obligations set out in Chapter V, Sections 2 and 3, of this EU AI Act.

The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models.

When preparing a standardisation request, the Commission must consult the Board and relevant stakeholders, including the advisory forum. The Commission must request the European standardisation organisations to provide evidence of their best efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Art 24 of Regulation (EU) No 1025/2012 (Art 40(2) extract).

The participants in the standardisation process must seek to promote investment and innovation in AI, by, amongst other things, the effective participation of all relevant stakeholders in accordance with Articles 5, 6 and 7 of Regulation (EU) No 1025/2012 as amended by Regulation (EU) 2022/2480 (on European standardisation) (Art 40(3) extract).

Common specifications (Art 41)

The Commission may adopt implementing acts establishing common specifications for the requirements set out in Section 2 of this Chapter or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V where the following conditions have been fulfilled:

  1. the Commission has requested, pursuant to Art 10(1) of Regulation (EU) No 1025/2012 as amended by Regulation (EU) 2022/2480 (on European standardisation), one or more European standardisation organisations to draft a harmonised standard for the requirements set out in Section 2 of this Chapter, or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V, and:

    (i) the request has not been accepted by any of the European standardisation organisations; or


    (ii) the harmonised standards addressing that request are not delivered within the deadline set in accordance with Article 10(1) of Regulation (EU) No 1025/2012 as amended by Regulation (EU) 2022/2480 (on European standardisation); or


    (iii) the relevant harmonised standards insufficiently address fundamental rights concerns; or


    (iv) the harmonised standards do not comply with the request; and

  2. no reference to harmonised standards covering the requirements referred to in Section 2 of this Chapter or, as applicable, the obligations referred to in Sections 2 and 3 of Chapter V has been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012, and no such reference is expected to be published within a reasonable period.

When drafting the common specifications, the Commission must consult the advisory forum referred to in Art 67. The implementing acts must be adopted in accordance with the examination procedure referred to in Article 98(2) (Art 41(1) extract).

Before preparing a draft implementing act, the Commission must inform the committee referred to in Art 22 of Regulation (EU) No 1025/2012 as amended by Regulation (EU) 2022/2480 (on European standardisation) that it considers the conditions laid down in Art 41(1) to be fulfilled (Art 41(2)).

High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in Art 41(1), or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations (Art 41(3)).

Where a harmonised standard is adopted by a European standardisation organisation and proposed to the Commission for the publication of its reference in the Official Journal of the European Union, the Commission must assess the harmonised standard in accordance with Regulation (EU) No 1025/2012 (Art 41(4) extract).

Where providers of high-risk AI systems or general-purpose AI models do not comply with the common specifications referred to in Art 41(1), they must duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto (Art 41(5)).

Where a Member State considers that a common specification does not entirely meet the requirements set out in Section 2 or, as applicable, comply with obligations set out in Sections 2 and 3 of Chapter V, it must inform the Commission thereof with a detailed explanation (Art 41(6) extract).

Presumption of conformity with certain requirements (Art 42)

High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used will be presumed to comply with the relevant requirements laid down in Art 10(4) (Art 42(1)).

High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 (the Cybersecurity Act) and the references of which have been published in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Art 15 of this EU AI Act in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements (Art 42(2)).

Conformity assessment (Art 43)

For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Art 40, or, where applicable, common specifications referred to in Art 41, the provider shall opt for one of the following conformity assessment procedures based on:

  1. the internal control referred to in Annex VI; or

  2. the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.

In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where:

  1. harmonised standards referred to in Art 40 do not exist, and common specifications referred to in Art 41 are not available;

  2. the provider has not applied, or has applied only part of, the harmonised standard;

  3. the common specifications referred to in point (a) exist, but the provider has not applied them;

  4. one or more of the harmonised standards referred to in point (a) has been published with a restriction, and only on the part of the standard that was restricted.

For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, where the high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities or by Union institutions, bodies, offices or agencies, the market surveillance authority referred to in Art 74(8) or (9), as applicable, shall act as a notified body (Art 43(1)).

For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body (Art 43(2)).
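To summarise the routing in Art 43(1) and 43(2) for Annex III systems, here is a minimal indicative sketch (not legal advice; it covers only Annex III systems, since the Annex I product path described below follows the sectoral legislation, and the function name and inputs are illustrative assumptions):

```python
# Indicative sketch of the conformity assessment routing in Art 43(1) and 43(2)
# (not legal advice). For Annex III point 1 systems, the provider may choose between
# internal control (Annex VI) and the Annex VII notified-body procedure only where
# harmonised standards or common specifications have been applied in full; otherwise
# the Annex VII procedure applies. For Annex III points 2 to 8, internal control applies.

def conformity_route(annex_iii_point: int, standards_applied_in_full: bool) -> str:
    """Return the applicable procedure for an Annex III high-risk AI system (indicative only)."""
    if annex_iii_point == 1:
        if standards_applied_in_full:
            # Provider may opt for Annex VI internal control or Annex VII with a notified body.
            return "Annex VI (internal control) or Annex VII (notified body), at the provider's option"
        return "Annex VII (notified body) procedure"
    return "Annex VI (internal control) procedure"

print(conformity_route(1, standards_applied_in_full=False))  # Annex VII (notified body) procedure
print(conformity_route(5, standards_applied_in_full=True))   # Annex VI (internal control) procedure
```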

For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with requirements laid down in Art 31(4), (5), (10) and (11) has been assessed in the context of the notification procedure under those legal acts (Art 43(3)).

Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Art 41, covering all requirements set out in Section 2 of this Chapter (Art 43(3) extract).

High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification (Art 43(4)).

The Commission is empowered to adopt delegated acts in accordance with Art 97 in order to amend Annexes VI and VII by updating them in light of technical progress (Art 43(5)).

The Commission is empowered to adopt delegated acts in accordance with Art 97 in order to amend paragraphs 1 and 2 of this Article in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission must adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and resources among notified bodies (Art 43(6)).

Certificates (Art 44)

Certificates issued by notified bodies in accordance with Annex VII shall be drawn-up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established (Art 44(1)).

Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid (Art 44(2)).
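The validity and extension ceilings in Art 44(2) reduce to simple arithmetic, illustrated in the indicative sketch below (not legal advice; the function and the example are assumptions for illustration only):

```python
# Indicative sketch of the certificate validity ceilings in Art 44(2) (not legal advice).
# Certificates for AI systems covered by Annex I are valid for at most five years and
# those covered by Annex III for at most four years; each extension, granted after a
# re-assessment, is again capped at five or four years respectively.

MAX_VALIDITY_YEARS = {"annex_i": 5, "annex_iii": 4}


def max_total_validity_years(annex: str, extensions: int = 0) -> int:
    """Maximum total validity in years for an initial certificate plus the given number
    of re-assessed extension periods (indicative arithmetic only)."""
    return MAX_VALIDITY_YEARS[annex] * (1 + extensions)


# Example: an Annex III certificate extended twice can span at most 4 * 3 = 12 years,
# subject to a re-assessment before each extension.
print(max_total_validity_years("annex_iii", extensions=2))  # 12
```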

Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available (Art 44(3)).

Information obligations of notified bodies (Art 45)

Notified bodies must inform the notifying authority of the following:

  1. any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII;

  2. any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII;

  3. any circumstances affecting the scope of or conditions for notification;

  4. any request for information which they have received from market surveillance authorities regarding conformity assessment activities;

  5. on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting (Art 45(1)).

Each notified body shall inform the other notified bodies of:

  1. quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued;

  2. Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued (Art 45(2)).

Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive conformity assessment results (Art 45(3)).

Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78 (Art 45(4)).

Derogation from conformity assessment procedure (Art 46)

By way of derogation from Art 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay (Art 46(1)).

In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in Art 46(1), provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in Art 46(1) is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded (Art 46(2)).

The authorisation referred to in Art 46(1) shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to Art 46(1) and Art 46(2). This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities (Art 46(3)).

Where, within 15 calendar days of receipt of the information referred to in Art 46(3), no objection has been raised by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with Art 46(1), that authorisation shall be deemed justified (Art 46(4)).

Where, within 15 calendar days of receipt of the notification referred to in Art 46(3), objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law, or the conclusion of the Member States regarding the compliance of the system as referred to in Art 46(3) to be unfounded, the Commission shall, without delay, enter into consultations with the relevant Member State. The operators concerned shall be consulted and have the possibility to present their views. Having regard thereto, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant operators (Art 46(5)).

Where the Commission considers the authorisation unjustified, it shall be withdrawn by the market surveillance authority of the Member State concerned (Art 46(6)).

For high-risk AI systems related to products covered by Union harmonisation legislation listed in Section A of Annex I, only the derogations from the conformity assessment established in that Union harmonisation legislation shall apply (Art 46(7)).

EU declaration of conformity (Art 47)

The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request (Art 47(1)).

The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available (Art 47(2)).

Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates (Art 47(3)).

By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate (Art 47(4)).

The Commission is empowered to adopt delegated acts in accordance with Art 97 in order to amend Annex V by updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become necessary in light of technical progress (Art 47(5)).

CE marking (Art 48)

The CE marking shall be subject to the general principles set out in Art 30 of Regulation (EC) No 765/2008 (Art 48(1)).

For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means (Art 48(2)).

The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate (Art 48(3)).

Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking (Art 48(4)).

Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other law (Art 48(5)).


Paul Foley Law provides advice and drafting (including on required due diligence, regulatory advice on AI purpose determination and compliance, contracts, licences, policies provision, appointment of authorised representatives, impact on data protection) related to the procurement, import, deployment (internally and externally), transfer of deployer role to other operators, and distribution of general purpose AI systems, whether with or without systemic risk.
Contact: paul@paulfoleylaw.ie
