Oct 2022

European Union

Introduction

Some say the EU approach to regulating artificial intelligence is in search of a “third way” between a purely market-driven absence of ex ante regulation and rigid, State-controlled regulation. Time will tell whether the EU finds a suitable “third way” to navigate the challenges of international trade. What we know at this stage is that the regulation of AI in Europe, though still in its infancy and incomplete, will deeply influence the market and the business activities of those who develop, make available or use AI systems in Europe. 

The EU approach to regulating AI recognizes both the many potential benefits of artificial intelligence, such as better healthcare, safer and cleaner transport, energy efficiency, etc., and the potential risks and harms AI can bring with it. The EU’s primary aim is therefore to ensure that artificial intelligence systems are or remain “trustworthy”: in other words, that they are socially acceptable, such that businesses are encouraged to develop and deploy these technologies while citizens embrace and use them with confidence. However, AI is first and foremost a technical artifact that relies on the combination of data and software. The regulation of AI must therefore not only focus on safeguarding fundamental values; it also needs to address the market conditions under which data can be made accessible and/or reusable. In this chapter, we will:

  • introduce the EU approach to data and data sharing; 
  • describe how the GDPR currently regulates some aspects of AI systems and (partly) automated decision-making; and
  • discuss the draft AI Act, its status and potential implications for the future. 

1. EU Data Landscape

The EU's ambition to regulate data is based on the recognition that data is a key factor of production in the digital economy. As a result, the EU wants to promote data sharing as much as possible, and as a policy orientation it requires that data remain findable, accessible, interoperable and re-usable (often referred to by the acronym FAIR). We will briefly illustrate this in the light of various recent pieces of EU legislation. 

1.1 Data sharing

Over the years, the EU adopted several instruments to promote and foster data sharing by the public sector. The latest stages in this process are the Open Data Directive and the Data Governance Act (DGA) (Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act)), which complements the former in respect of data sets that contain personal data, trade secrets or proprietary information and for that reason fall outside the "open data" requirement (the idea that data must be made available for re-use for both commercial and non-commercial purposes). In order to reap the benefits of data sharing, the DGA specifies how data can be shared in spite of such limitations, while ensuring an effective protection of third parties' rights. The DGA also creates additional sources of data sharing: public sector bodies, data intermediaries and data altruism organisations are recognized and held accountable under specific rules of independence and transparency, in the hope that they contribute to building a stronger market for data exchange. 

Whether the source of the data is a public sector body, a data intermediary or a data altruism organisation, the DGA lays down the same fundamental rule: sharing data sets and safeguarding personal data or intellectual property rights and trade secrets must go hand in hand. To achieve that purpose in practice, the DGA prescribes a number of duties when sharing: 

  • Data must be shared on a fair, transparent, non-discriminatory, and (as a rule) non-exclusive basis. 
  • Data recipients are accountable and must commit to respecting IP and trade secrets as well as data protection laws, implement anonymisation or other protections against disclosure, pass on contractual obligations to their counterparts involved in the data sharing, facilitate the exercise of rights by data subjects, and so on.
  • A secure processing environment must be implemented to ensure these principles are abided by. Interestingly for AI, even the mere “calculation of derivative data through computational algorithms” qualifies as a use that requires such a secure processing environment to be put in place. 

The DGA refers to modern techniques for preserving privacy, such as anonymisation, differential privacy, randomisation, etc. Such techniques may be further defined or extended, or made explicitly mandatory, through implementing legislation. 
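The DGA itself does not define these techniques, but the intuition behind one of them, differential privacy, can be conveyed in a few lines: a query over personal data is answered with calibrated random noise, so that the presence or absence of any single individual barely changes the result. The Python sketch below is illustrative only (the function name and parameters are our own, not drawn from the DGA) and implements the classical Laplace mechanism for a counting query:

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so noise drawn from
    Laplace(scale = 1/epsilon) yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a noisy count of data subjects older than 40.
ages = [34, 51, 29, 47, 62, 38]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

The point to retain is that privacy here becomes a measurable property of the query mechanism (the parameter epsilon), which is precisely the kind of technique that implementing legislation could further define or render mandatory.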

As a result, in practical terms, the DGA can be seen as an additional layer to the GDPR, and as a foundation for the set-up of future “European Data Spaces”, which will be laid down by future sectoral legislation defining the characteristics and requirements of sectoral data exchange systems. For an example in the field of digital health records and the re-use of health data for research purposes, see the Proposal for a Regulation of the European Parliament and of the Council on the European Health Data Space, COM/2022/197 final (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52022PC0197).

1.2 Data ownership

As regards data held by private businesses, the rules mentioned above do not apply. Typically, data collected or exchanged in the context of connected products is held by the manufacturers or users of such products. Under the current legal framework, those manufacturers or users are at liberty to keep such data secret or to make it available for the purposes and upon the terms they deem fit. Admittedly, some limitations apply: for instance, the new text and data mining exception in respect of copyrighted works and databases (articles 3 and 4 of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC). Similarly, businesses that work with a data intermediary qualifying under the DGA for such data sharing would need to take into account the conditions and contractual terms that the DGA imposes. However, the traditional view remains that data is proprietary to whoever has it in its possession.   

The European Commission's Proposal for a Regulation of the European Parliament and of the Council on harmonised rules on fair access to and use of data (Data Act), COM(2022) 68 final, 2022/0047(COD) (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN) is an attempt to fundamentally change that approach and impose access to data from such connected products under FRAND-type licenses: so-called “data holders” would have the obligation to provide information about, and grant access to, data generated by their products, upon request of users or third parties authorized by said users, subject to contractual terms that the proposed Data Act requires to be fair, reasonable, non-discriminatory and transparent. 

The hope is that such increased access to data sets will foster innovation, and it will likely bring about major shifts in the market, including for the development of AI systems. Obviously, the granting of access to data can have significant implications for the protection of a company's trade secrets and proprietary information. The proposed Data Act states that rules about trade secrets must remain unaffected, and that disclosure should be limited to the extent strictly necessary and subject to confidentiality measures and commitments. In practice, this will require businesses to be even better prepared to defend their trade secrets and prevent their dissemination, anticipating as much as possible the occurrence of an access request. At the time of writing, it is not possible to advise what the exact scope of the Data Act will be, what exact types of use cases it will regulate and how; nor is it possible to forecast whether the text will improve the accessibility of data in a manner that is useful or consistent with the specific needs of those making or developing AI systems. But it is definitely a text worth keeping an eye on.

1.3 Platforms

One particular area of concern for the EU lawmaker is the ability of some tech companies to control massive amounts of data and leverage it to influence market behaviour and the fairness of exchanges in general. Hence, the EU has attempted to rein in so-called “big tech” companies and to make digital markets more competitive. These efforts culminated in the adoption of the Digital Markets Act (DMA) and the Digital Services Act (DSA). Both were adopted in spring 2022 but the final texts have not been published yet. In short, the DMA sets out detailed actions that entities with a certain market power (“gatekeepers”) must or must not take, with such gatekeepers to be designated by the European Commission. Whilst the DMA applies to “tier 1” gatekeepers, the DSA takes a more horizontal approach and imposes content moderation obligations upon online platforms and other providers of digital services. These obligations will be more granular than under the current e-Commerce Directive, and will hold platforms and service providers accountable for much greater transparency, risk monitoring, effective take-down procedures, responsiveness towards end-users, etc. 

The DMA and DSA shift from an ex post to an ex ante regulatory approach in order to create more competition, and they will result in a significant compliance burden for businesses. With respect to data, the current draft DMA notably limits the ability to combine data sets and increases portability rights for end-users. The current draft DSA imposes, in particular, increased transparency around algorithms used for recommendation or profiling purposes. 

It is to be noted that the enforcement powers for both instruments lie in the hands of the European Commission, rather than being decentralized as is the case with supervisory authorities under the GDPR. Another significant aspect for companies is the need to embed compliance in the very design of their systems, products and services. In other words, where companies automate the delivery of products or services, they must also ensure that the software or AI systems that underpin such automation are designed in a way that lives up to the expectations of the regulators. 

2. (Automated) Decision-making and the GDPR

Initially, European data protection law was meant to deal with the challenges of public sector databases and the aggregation of information about citizens on computer mainframes. It then evolved into a right to self-determination and took on more and more aspects of the use of data by private businesses. Nowadays, the fundamental right to the protection of personal data is enshrined in the EU Charter, and the GDPR grants individuals significant rights and powers to control not only the collection and use of personal data, but also further operations such as profiling, conducting analytics, combining data with other data sets for new purposes, etc. According to Recital 4 of the GDPR, “the processing of personal data should be designed to serve mankind”, and there are few if any aspects of the lifecycle of personal data that are left uncaught by its provisions.

In addition, the notion of “personal data” has an extremely broad definition and scope of application, such that individuals are and remain protected in respect of any information that not only directly relates to them, but also has the potential of impacting various aspects of their lives. Under the GDPR, personal data refers to any information that relates to an identified or identifiable individual. By "identifiable", the GDPR means that the individual "can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person". The standard of whether an individual is "identifiable" was set by the European Court of Justice in Breyer (ECJ, 19 Oct. 2016, C-582/14), which held that, in order to ascertain whether a person is identifiable, "account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person".

Therefore, it is reasonable to state that the GDPR already offers a strong potential regulatory framework for AI systems that process personal data, in the sense that it regulates to a large extent the decisions that are taken, or the outputs that are produced, as a result of computing or analysing such personal data. Some argue that the existing data protection framework must be improved and cannot be applied to so-called big data analytics, where the need to re-use massive sets of data for previously unknown purposes, and for goals that are and remain partly undefined, seems at odds with classical data protection principles such as purpose limitation and data minimization (see “Big data and data protection” by A. Mantelero, in G. Gonzalez Fuster, R. Van Brakel, P. De Hert (eds.), “Research Handbook on Privacy and Data Protection Law. Values, Norms and Global Politics”, Edward Elgar, 2022, 335-357). However, in spite of these theoretical arguments, we see in practice that courts and data protection authorities currently use and apply the GDPR provisions to assess algorithmic processes and the use of personal data to support (or replace) decision-making processes. This can be seen in respect of the general principles and rules of the GDPR, as well as of the specific provision on automated decision-making (article 22). We look at those two aspects in turn. 

2.1 GDPR Principles

It must be borne in mind that the GDPR essentially requires companies to anticipate, assess and manage the risks and harms that the processing of personal data entails for the rights and freedoms of individuals ("data subjects"). Given that AI systems have a clear ability to interfere with many fundamental rights, and heavily rely on the processing of personal data, the GDPR clearly springs to mind as one of the key regulatory layers for the development, use and deployment of AI systems within the European Union, beyond the specific realm of automated decision-making that we analyse in the next section. It is useful to briefly highlight some aspects of its content that can impact developers and users of AI systems in practice.

First of all, any "high risk" data processing system must be subjected to an impact assessment that describes the potential harms for rights and freedoms of individuals, as well as the measures meant to address and mitigate those risks. This assessment exercise is iterative, and there are situations in which the results must be shared with regulators before any processing occurs.

Second, the risks that may result from the processing of personal data are varied in nature: according to Recital 75 of the GDPR, they can materialize as physical, material or non-material damage, and may include situations as diverse as discrimination, identity theft or fraud, financial loss, reputational damage, loss of confidentiality, unauthorized reversal of pseudonymization, but also "any other significant economic or social disadvantage" or the deprivation of rights and freedoms. One particular category of risk is profiling. Under the GDPR, "profiling" is defined as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements". As one can see from that definition, any use of personal data to support decision-making is likely to fall within the notion of profiling.

Third, each of these specific risks must be assessed and factored into a management and risk mitigation exercise, coupled with the implementation of appropriate technical and organizational measures to ensure that the provisions of the GDPR are complied with. In addition to such measures, profiling and all other types of processing operations must comply with fundamental principles such as the need for a legal basis, the requirements of accuracy and data minimization, fairness, general transparency and non-discrimination. It follows that every discrete data processing operation involved in an AI system must be tested or challenged on the basis of these rules and principles.
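How low the profiling threshold sits in practice can be illustrated with a deliberately simple, hypothetical example (the names, weights and figures below are our own invention, not taken from any law or case): even a trivial rule-based scorer such as this one "evaluates personal aspects" like economic situation and reliability, and would therefore amount to profiling, even though no machine learning is involved.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float   # economic situation
    missed_payments: int    # payment reliability
    years_employed: float   # employment stability

def credit_score(a: Applicant) -> float:
    """A trivial rule-based score: automated processing of personal data
    used to evaluate 'personal aspects', i.e. profiling under the GDPR."""
    score = min(a.monthly_income / 1000, 5.0)
    score -= 2.0 * a.missed_payments
    score += 0.5 * a.years_employed
    return score

print(credit_score(Applicant(3200.0, 1, 4.5)))  # 3.2 - 2.0 + 2.25 = 3.45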

On that basis, several screening or risk assessment systems have already been found to qualify as profiling and to breach important GDPR provisions: an online screening application assessing the psychological state of gun licence applicants, a tax fraud risk assessment system, an online tool for the automated evaluation of job seekers' chances of finding employment, or the creation of commercial profiles of customers, for instance. In those cases, courts or data protection authorities prohibited the continuation of the processing operations, imposed an increased burden of transparency, or mandated the disclosure of explanations about the logic of the profiling or the rationale for a decision, in order to enable verification of the accuracy and lawfulness of the personal data handled. In some of these cases, the issue at hand was essentially that the processing carried out by a public authority or government agency was not sufficiently "prescribed by law"; in other words, the lawmaker should have provided a sound legal basis for it, with an appropriate democratic debate taking place to define with enough granularity what is allowed and what is not, the possible means of redress, etc. In other cases, however, the courts or data protection authorities went after the practices of private businesses, in areas such as employment, workforce management or recruitment, credit-worthiness, marketing and fraud detection, where algorithms and artificial intelligence systems were developed or used to support the decision-making process.

2.2 Automated decision-making 

In contrast to the situations described above, a decision can be made purely on the basis of automated processing, in the sense that there is no human intervention in the process (so-called "automated decision-making" or ADM). 

The GDPR devotes a specific provision to such situations, where a decision is taken 'solely' on the basis of automated processing, including profiling, provided that it 'produces legal effects' or 'similarly significantly affects' individuals. The only situations where such automated decision-making is allowed, are when it is:

  • necessary for the conclusion or the performance of a contract; 
  • authorized by a law of the European Union or of a Member State; or 
  • based on the explicit consent of the data subject. 

In any event, suitable measures must be taken to safeguard fundamental rights, including at least the right to obtain meaningful human intervention on the part of the data controller, to express one's point of view and to contest the decision. Lastly, article 15 GDPR provides that the data subject may request access to information as to whether automated decision-making is in place, and may obtain 'meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject'. It is unclear whether this duty to disclose information about the logic and consequences of the decision-making applies exclusively to qualifying automated decision-making, or extends to other cases where profiling or automated processing is part of the decision-making process.

As noted above, according to article 22 GDPR individuals have a "right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her". The exact threshold for this provision to apply is still debated, as is the exact meaning of a "right not to be subject to" automated decision-making: is it a general prohibition, meaning that governments and companies can only deploy ADM if all conditions under the limited legal exceptions are fulfilled? Or does it comprise a limited right for data subjects to "opt out" of an automated decision-making process, in which case the essential requirement is for those systems to be designed and set up in such a way that a (human) decision can still be taken whenever a data subject objects to the automated processing? According to EU data protection regulators, article 22 provides for a prohibition, but the authority to interpret the GDPR lies with the European Court of Justice (ECJ). It is entirely possible that the ECJ will decide this question differently, as pending cases will soon require it to analyse the rules on automated decision-making in the context of credit scoring.

Whatever the exact scope of article 22 GDPR turns out to be, the GDPR and its provisions on automated decision-making and profiling do, at least to some extent, regulate the deployment and use of AI systems insofar as they can have an impact on individuals' lives. Although certainly imperfect, the GDPR brings with it the potential for severe prohibitions and substantial fines, together with potential claims for damages. Therefore, in practice, businesses and companies that develop AI systems for use within the European Union must carefully think about how they can mitigate those potential adverse consequences. It appears that they must ensure (see the illustrative sketch after this list): 

  • a degree of internal preparedness and organization to ensure substantial and meaningful human involvement, including organisational rules for decision preparation or review and training for employees; 
  • a degree of transparency towards end users and governments as concerns the constitutive elements of the decision-making process, including the specific factors and parameters that are utilised and how these could possibly be altered or adapted; and 
  • that machine-based decisions are not immutable in their actual consequences for individuals, so that the effects on fundamental rights can be mitigated, undone or at least explained and justified.
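By way of illustration only, the following Python sketch shows one way these safeguards might be reflected in a system's design (all names, thresholds and the scoring function are hypothetical assumptions, not a compliance recipe): each automated decision records the factors it relied upon so that it can later be explained, and contested or borderline cases are routed to a human reviewer instead of being decided by the machine.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                                  # "approve", "reject" or "pending_review"
    factors: dict = field(default_factory=dict)   # inputs recorded for later explanation
    decided_by: str = "system"

def decide(features: dict, score_fn, threshold: float,
           contested: bool = False) -> Decision:
    """Automated decision with two safeguards built in: traceable factors,
    and a human-review path for contested or borderline cases."""
    score = score_fn(features)
    factors = dict(features, score=score)
    if contested or abs(score - threshold) < 0.5:
        # The data subject objected, or the case is too close to call:
        # defer to a human so that meaningful intervention remains possible.
        return Decision("pending_review", factors, decided_by="human_queue")
    return Decision("approve" if score >= threshold else "reject", factors)

# Example: an uncontested, clear-cut case is decided automatically.
print(decide({"income": 3.2}, lambda f: f["income"], threshold=2.0))
```

The design choice worth noting is the explicit human-review path: article 22-style safeguards are far easier to honour when the system is built from the outset to hand cases back to people.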

3. AI Act (draft)

Following a long process and building upon a series of policy documents, on 21 April 2021 the European Commission tabled a proposal for a Regulation “laying down harmonised rules on artificial intelligence” (Artificial Intelligence Act) and “amending certain Union legislative acts” (the “Proposal”) (European Commission, 21 April 2021, COM(2021) 206 final, 2021/0106 (COD), at www.europarl.europa.eu/RegData/docs_autres_institutions/commission_europeenne/com/2021/0206/COM_COM(2021)0206_EN.pdf).

As the title suggests, the draft AI Act is a rather technical piece of legislation, with more than 80 articles and no fewer than 89 recitals. It aims at harmonising rules within the internal market, and seeks to foster innovation while safeguarding public interests and fundamental rights. Despite these ambitious goals, the Proposal remains a specific legislative instrument rather than an all-encompassing, comprehensive regulation covering all legal aspects of AI. It must be read together with other current or future EU laws that notably aim to regulate access to and use of (personal) data, competition in digital markets, safety regulation of machinery and even product liability. 

The Proposal is currently going through the various stages of the complex EU legislative process: both the Council and the European Parliament, the two co-legislators, are discussing the text and finalizing their own positions. The Council is likely to reach a final position by the end of 2022 or the beginning of 2023, whereas the European Parliament is unlikely to adopt a final position in plenary until the first part of 2023 at the earliest (interestingly, more than 3,000 amendments have been tabled to the draft report of the joint lead committees (Internal Market and Civil Liberties)). 

Subsequent to these processes, a final round of behind-the-scenes political discussions will eventually lead to the adoption of the AI Act. It is not possible at this stage to forecast how and to what extent the Commission’s Proposal is going to be amended. Hence, we limit ourselves to highlighting the fundamental issues and the potential challenges for businesses and organisations. We briefly present the scope and structure of the Proposal (disregarding the amendments tabled by the Council and the Parliament so far), and then turn to the key points of criticism.  

3.1 Scope and Structure

Notions and scope of application 

Perhaps surprisingly, the Proposal defines artificial intelligence and AI systems (both seem to belong to the same general concept) in such an open-ended and broad way that it could cover virtually any type of computer program. It therefore catches automation processes where pre-programmed rules are defined based on knowledge or logic-based reasoning, as well as “learning systems” that use large data sets to detect patterns and make predictions. It is questionable whether such a broad definition serves the risk-based approach that underpins the Proposal. Many of the perceived risks of harm, bias and discrimination, opacity, etc. derive specifically from the technical characteristics of certain AI systems, and are absent, or less present, in other, logic-based systems. As a result, the Proposal might not only overregulate AI systems to some extent, but also fail to address the specific threats to fundamental rights that derive from the use of “black box” systems, machine learning and other approaches that rely heavily on data sets to make inferences or predictions and support decision-making processes. 

The Proposal primarily covers so-called “providers of AI systems”, defined as those that develop or have developed an AI system with a view to placing it on the market or putting it into service under their own name or trademark. There are also obligations for users, distributors or importers of AI systems but to a lesser degree. 

The Proposal applies to AI systems that are put into service, placed on the market or used “within the Union”, irrespective of the place of establishment of their providers, and it also applies where the output produced by the AI system is “used in the Union”. As such, the Proposal seems not to fully acknowledge two important dimensions of the current AI market. First, because of, or thanks to, cloud computing technology, the various components of an AI system can be provided from several countries or places, the specific location of which tends to be difficult to determine. Second, AI capabilities can be offered as a service to business users who do not want, or cannot afford, to build the entire chain of necessary components themselves, and who opt for using those built by larger players (see notably “Artificial Intelligence as a Service: Legal Responsibilities, Liabilities and Policy Challenges”, by J. Cobbe & J. Singh, Computer Law & Security Review, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3824736). In such cases, the delineation of responsibilities between providers and users of such MLOps or AIaaS offerings needs to be considered carefully. In the current state of the Proposal, it is largely unclear whether customers of such products would qualify as “providers”, whilst it is almost certain that, because of their lack of knowledge of or insight into the technology, they would be unable to comply with requirements like risk management and quality management systems, technical documentation, transparency, human oversight, etc.  

Risk-based approach, risk layers and key applicable obligations 

The focus of the Proposal is on regulating the placing on the market, making available and use of a variety of AI systems, distinguishing between the categories of risks those systems entail. There are four levels of risk, defined by the likelihood that an AI system may cause harm to the health, safety or fundamental rights of a specific individual. Where the risk is deemed unacceptable, the covered AI systems are prohibited, though the Proposal carves out certain types of uses or offers limited flexibility. Where the risk qualifies as “high”, AI systems must meet certain essential requirements and undergo a conformity assessment procedure. In respect of limited-risk AI systems, the Proposal only imposes transparency rules, in order to let users know they are interacting with an AI system. Lastly, AI systems that pose only minimal risk are subject to voluntary codes of conduct. 

The Proposal contains a limited set of categories of prohibited practices: 

  • manipulative systems that distort an individual’s behaviour in a harmful manner; 
  • social scoring systems used by public authorities and leading to detrimental treatment; and 
  • real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes. 

The current text leaves much uncertainty as to the exact scope of application of these prohibited practices, as well as to the limitations and carve-outs. Whether these prohibitions will bring additional value, as compared to what existing laws on consumer protection, data protection or anti-discrimination can achieve, remains to be seen. 

As many commentators have underlined, particularly with respect to high-risk AI systems, the Proposal relies heavily on the legislative and regulatory framework around product safety. In order to address AI-related risks, the Proposal’s conceptual approach is to define only a limited set of essential requirements, leaving the possibility for further applicable rules to be adopted as technical standards. It tasks providers of AI systems with carrying out a conformity assessment procedure. For that reason, many argue the Proposal does not provide for an effective protection of fundamental rights and fails to confer on individuals harmed by AI systems any meaningful means of redress.

(See “The European Commission’s Proposal for an Artificial Intelligence Act – A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)”, by M. Ebers, V.R.S. Hoch, F. Rosenkranz, H. Ruschemeier & B. Steinrötter, Multidisciplinary Scientific Journal, 2021, 4, 589-603; “Demystifying the Draft EU Artificial Intelligence Act”, by M. Veale & F. Zuiderveen Borgesius, Computer Law Review International, 2021/4, 97-112; and “How the EU can achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act”, by N. Smuha, E. Ahmed-Rengers, A. Harkens, W. Li, J. MacLaren, R. Piselli & K. Yeung, Leads LAB @University of Birmingham, 5 August 2021.)

The threshold for qualifying as a high-risk AI system is twofold: either the AI system is a product, or a safety component of a product, that is covered by certain EU health and safety harmonisation legislation (as set out in Annex II) and is required to undergo a third-party conformity assessment, or the AI system is referred to in Annex III of the Proposal (which in its current state includes eight fixed areas and several corresponding examples or use cases, but could be expanded or revised during the lawmaking process). In practice, high-risk AI systems will be found in areas like machinery, lifts, toys, medical devices, vehicles, and so on (by reference to the legislation listed in Annex II), or will be systems used in the following areas: 

  • biometric identification and categorisation; 
  • management and operation of critical infrastructure (safety components in road traffic operation, or in water, gas or energy supply); 
  • education and vocational training (students' assessments, determining access, etc.); 
  • employment, worker management and access to self-employment (recruitment and selection, screening, evaluations, task allocation, performance assessment, etc.); 
  • access to and enjoyment of essential private or public services and benefits (eligibility, creditworthiness, etc.); 
  • law enforcement; 
  • migration, asylum and border management; or 
  • administration of justice and democracy.

Providers of high-risk AI systems must demonstrate that the latter comply with the essential requirements forming the heart of Title III of the Proposal. They must create and implement a risk management system that identifies potential harms and residual risks, fosters adequate design, development and testing, and is updated throughout the lifetime of an AI system. The training, validation and testing data sets used to train AI systems must be relevant, representative, free of errors and complete, and sufficiently specific to their application. Design, data collection, annotation, labelling, cleaning, enrichment and aggregation (for which the Proposal creates an exemption for the processing of special categories of data under the GDPR) must be relevant and appropriate; the examination of possible biases and the identification of gaps and shortcomings also form part of the same mandatory “data governance and management practices”. As a result, AI systems must achieve an appropriate level of accuracy, robustness and cybersecurity. They must include appropriate interfaces or tools that enable effective human oversight, including for instance a “four eyes” principle for biometric identification systems.

Providers must also draw up detailed technical documentation that demonstrates compliance with the essential requirements, and they must provide authorities with the information needed to assess the compliance of their AI systems (including technical documentation and data sets). Annex IV of the Proposal, which defines the minimal technical documentation, notably requires documentation of the general logic of the system and the algorithms, the training methodologies or techniques, the data sets used, etc. Providers of AI systems must also implement record-keeping or logging tools to enable traceability throughout the lifecycle.

In addition, providers of AI systems are subject to several obligations to ensure compliance with these essential requirements. They must perform a conformity assessment, which can be done by themselves or with the involvement of a notified body, depending on the use case. Passing the conformity assessment will mean that AI systems are presumed to be in conformity with the essential requirements laid down in the AI Act, possibly supplemented by additional technical harmonisation standards. 
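What such “data governance and management practices” will require in technical terms remains to be defined by standards, but a minimal sketch gives the flavour. In the Python example below, the checks, column names and data are purely illustrative assumptions, not requirements taken from the Proposal:

```python
import pandas as pd

def dataset_report(df: pd.DataFrame, protected_attr: str) -> dict:
    """Crude first-pass checks against the art. 10 themes: completeness,
    duplicates, and group representation for one protected attribute."""
    return {
        "n_rows": len(df),
        # "free of errors and complete": fraction of missing values per column
        "missing_ratio": df.isna().mean().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        # "relevant, representative": group shares, a first look at sampling bias
        "group_shares": df[protected_attr].value_counts(normalize=True).to_dict(),
    }

df = pd.DataFrame({"age": [34, 51, None, 47], "sex": ["F", "M", "F", "F"]})
print(dataset_report(df, protected_attr="sex"))
```

None of this guarantees compliance; it merely shows that several of the data-quality themes in the Proposal lend themselves to routine, automated verification.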

Finally, in respect of limited-risk AI systems, the Proposal only imposes a transparency requirement, in order to ensure individuals are made fully aware that they are engaging with an AI system, typically a conversational agent (“bot”). Equally, the Proposal targets emotion recognition and biometric categorisation systems, for which users must be informed of the operation of the system in place. In addition, the Proposal mandates disclosure where so-called “deep fake” technology is used to generate or manipulate image, audio or video content. 

3.2 Mapping the Debate 

As noted above, the Proposal has attracted a considerable amount of debate and discussion. It comes as no surprise that many, from businesses and civil rights organisations to legal scholars and industry representative bodies, have expressed criticism. It is not possible to discuss every aspect of these discussions in detail, but generally they illustrate the need for the AI Act to (better) address three key challenges: 

  • the safeguard of fundamental rights;
  • the practical costs and feasibility of compliance with the AI Act requirements; and 
  • the need for legal certainty in combining the AI Act with other legislation. 

These concerns, at least to some extent, have also inspired the positions adopted by the Council and several committees of the Parliament. Hopefully they will lead to an improvement of the Proposal. 

Fundamental rights

Whoever purports to lay down rules about AI needs a “moral compass” to define what goal the regulation should achieve. Many argue that human rights, deeply rooted in the constitutional traditions of EU Member States and considered universal values with a strong international and supranational institutional enforcement system, constitute an appropriate “objective moral compass towards securing ‘good AI’” (see "Beyond a Human Rights-Based Approach to AI Governance: Promises, Pitfalls, Plea", by N. Smuha, Philos. Technol. 34 (Suppl. 1), 91-104 (2021), available at https://doi.org/10.1007/s13347-020-00403-w). Against this background, before adopting the Proposal, the Commission set up an independent High-Level Expert Group on Artificial Intelligence, tasked with drafting ethical guidelines for businesses and organisations (see https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai) and policy recommendations for regulators. The Expert Group identified human rights as the cornerstone of a normative framework for trustworthy AI: one that not only complies with the existing regulatory framework but also inspires trust, because the values on which it builds serve as the benchmark to assess both the adoption of rules and the practical impact of AI systems on end-users. 

The Commission’s strategy papers endorsed the ethical guidelines, but many voiced concerns that the Proposal fails to really embed fundamental rights into the proposed rules. It does not give fundamental rights any form of pre-eminence in the assessment of AI systems, and treats them as only one of the many interests that must be balanced against one another. Also, the Proposal’s approach equates human rights with a set of technical requirements, as if defining and imposing standards were enough to ensure effective and meaningful protection of human rights. Another major aspect of the criticism is that the Proposal does not afford individuals any set of specific, enforceable rights to claim that they are affected by AI systems in an unacceptable or harmful way (as at the date of publication of this guide, it appears that this criticism may be addressed by a legislative measure to accompany the AI Act, a so-called AI Liability Directive (AILD), which may enable individual rights of action; however, this has yet to be released by the EU). Put simply, a regulation that safeguards fundamental rights must take account of the actual use cases and the actual impacts of AI systems in real-life situations. Arguably, the choice of regulating AI systems through standardization and conformity assessments could well lead to insufficient safeguards for human rights, a poor enforcement framework and a lack of democratic participation (see "Standardizing AI. The Case of the European Commission’s Proposal for an ‘Artificial Intelligence Act’", by M. Ebers, in L.A. DiMatteo, C. Poncibo & M. Cannarsa (eds.), "The Cambridge Handbook of Artificial Intelligence. Global Perspectives on Law and Ethics", Cambridge University Press, 2022, 321-344; and “How the EU can achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act”, by N. Smuha, E. Ahmed-Rengers, A. Harkens, W. Li, J. MacLaren, R. Piselli & K. Yeung, Leads LAB @ University of Birmingham, 5 August 2021, available at https://ssrn.com/abstract=3899991). 

Based on the same idea that fundamental rights are foundational to AI regulation, many also argue that the practices which the Proposal claims to be of “unacceptable risk” are poorly defined and riddled with loose carve-outs and exceptions that could soon make massive surveillance, social scoring and deceptive practices a reality rather than a strictly prohibited practice (see https://edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf). It is to be hoped that the Proposal will be improved in that regard. 

Practical challenges  

Many argue that the Proposal insufficiently defines the detailed substance of the rules with which it wants AI system providers to comply, and that it relies overly on rulemaking through technical standards adopted by European Standards Organisations (ESOs). There are at least two dimensions to this critique. First, the Proposal sometimes fails even to define the threshold or the level of quality or performance that it purports to impose. For instance, AI systems must be “accurate, robust and cybersecure” (art. 15), training data must be “free of errors and complete” (art. 10), and AI systems must be “transparent and interpretable” (art. 13). But what exactly those terms mean is left largely undefined. Second, even where essential requirements are defined, it is in a rather broad way, and the ESOs are tasked with laying down the details of many aspects of the practical implementation of the AI Act requirements. It follows that, to a large extent, the exact scope and substance of the future rules is still unknown, and there is no certainty that it will even be possible to adopt relevant and appropriate technical standards in due time. Besides the issue of standard-setting, other areas of the Proposal require further clarification, such as the delineation of the different roles across the supply chain, or the extent to which the requirements can be fine-tuned to accommodate factual circumstances such as the modification, decommissioning or sunsetting of AI systems. 

Several obligations for high-risk AI systems appear unworkable in practice. For instance, where the Proposal imposes human oversight, it does not clearly define at which stage of the process, how the required oversight must take place, or with what consequences. Where it requires data sets to be appropriate and free from errors, it does not create a set of clear rules for the use of anonymous and pseudonymous data, nor does it provide practical guidance as to how to achieve the required level of anonymization or pseudonymization. Another example is the obligation to make the validation, training and testing data sets available to market surveillance authorities, which will not always be possible in practice and will often require accommodating additional rules on matters such as data protection, rights in trade secrets or intellectual property rights.  

Lastly, many fear the burden of the upcoming compliance regime could be too onerous, given the multiple levels of authorities and regulators and the complexity of some of the requirements. They warn of the potential anti-competitive effects the AI Act could have, in particular for small and medium-sized companies. Amongst the proposed improvements, it is argued that the AI Act should foster a culture of responsibility across the AI supply chain and ecosystem, rather than focus on “providers”, and should make the compliance requirements more proportionate to the actual, specific circumstances of each use case. 

Combination with other legislative instruments 

Commentators note that the Proposal creates significant legal uncertainty as to how it interacts, or must be applied cumulatively, with existing pieces of legislation or other areas of the law (an issue that will be even more evident if the proposed AILD is adopted). 

First, the Proposal has a broad scope of application, and it certainly intends to provide a unitary system of rules on AI systems within the internal market. But it does not clearly state whether it sets a mandatory, fully harmonized set of rules, or whether Member States enjoy any margin of manoeuvre to further regulate, prohibit or impose additional obligations upon AI systems, be it in respect of the providers, makers, importers, or users. 

Second, the Proposal lacks a clear articulation with the various pieces of legislation governing the access to and use of data at large, such as the Open Data Directive (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32019L1024), the Data Governance Act (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868) and the newly introduced notions of data intermediaries and data altruism. 

Furthermore, the Proposal would also benefit from a statement that it does not affect existing data protection rules, including the EUDPR (Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC) and the LED (Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA). Strikingly, as outlined in Section 2 ((Automated) Decision-making and the GDPR) above, many of the GDPR's provisions have the potential to remedy or prevent critical risks posed by AI systems, but the Proposal does not acknowledge this, nor does it overtly grant the GDPR any prominent status as a complementary, additional layer of regulation to prevent the harms and risks that AI systems pose for fundamental rights. 

Lastly, the requirements for transparency and explainability, as well as the duty to make that information available to enforcement agencies and regulators, are likely to hamper innovation and deprive companies of competitive advantages if they are forced to disclose the knowledge or technology that lies “behind” their AI systems. Many therefore call for a fairer balance to be struck with trade secrets protection, which in practice is a key instrument for protecting innovation in the field of AI (see also “Artificial Intelligence between Transparency and Secrecy: From the EC Whitepaper to the AIA and Beyond”, by G. Spina Ali and R. Yu, European Journal of Law and Technology, 2021/3, at https://ejlt.org/index.php/ejlt/article/view/754/1044).
