The GDPR of AI? European Commission Publishes Proposal on AI Regulation
On 21 April 2021, the European Commission published the hotly anticipated Proposal for a Regulation on Artificial Intelligence (the “AI Regulation”). Described as the ‘first ever legal framework on AI’, the AI Regulation aims to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.
There are many concepts in the AI Regulation which take direct inspiration from the General Data Protection Regulation (the “GDPR”), particularly the broad and extra-territorial scope; enforcement via administrative fines; and the establishment of a European Artificial Intelligence Board (“EAIB”).
Scope
The AI Regulation defines AI systems as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. This broad definition is intended to ‘future-proof’ the application of the AI Regulation.
Like the GDPR, the AI Regulation applies to a broad range of stakeholders and is extra-territorial in scope, including:
- Providers who place AI systems on the EU market or put them into service, irrespective of whether they are established within the EU or in a third country;
- Users of AI systems who are established in the EU; and
- Providers and users of AI systems established outside the EU where the output produced by the AI system is used in the EU.
In another nod towards GDPR concepts, where a provider is established outside of the EU and an importer cannot be identified, the provider must appoint an authorised representative in the EU, who will be responsible for performing certain tasks mandated by the provider, including dealing with competent national authorities.
Risk-based approach
The AI Regulation adopts a risk-based approach in addressing key issues around the use and governance of AI systems, grouping AI systems into the following risk tiers (a short illustrative sketch follows the list):
- Unacceptable Risk: AI systems considered to be a clear threat to safety, livelihoods and rights of people will be prohibited, including systems with significant potential to manipulate human behaviour to circumvent users’ free will; systems used for indiscriminate surveillance applied in a generalised manner; systems that allow ‘social scoring’ by governments; and the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (unless certain limited exceptions apply).
- High-risk: AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training, which may determine access to education and the professional course of someone's life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
- Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk AI systems will be subject to strict obligations before they can be placed on the market, including:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Human oversight measures; and
- High level of robustness, security and accuracy.
Providers of high-risk AI systems will also be required to have their systems undergo a ‘conformity assessment procedure’ before placing them on the market, demonstrating compliance with the above-mentioned obligations.
- Low or minimal risk: The vast majority of AI systems fall into this category, representing only minimal or no risk to citizens' rights or safety, and the AI Regulation does not intervene here: applications such as AI-enabled video games or spam filters may be used freely. Where users interact with AI systems such as chatbots, they should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.
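To make the tiering concrete, here is a minimal Python sketch, offered purely as an illustration (nothing in the proposal prescribes or implies any particular implementation); the mapping of example use cases to tiers simply follows the examples discussed above:

```python
# Illustrative sketch only, not legal advice: the risk tiers summarised
# above modelled as a simple lookup. The mapping of example use cases to
# tiers follows the examples in this note and is a simplification.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations and prior conformity assessment"
    LOW_OR_MINIMAL = "free use; users must know they face a machine (e.g. chatbots)"

# Hypothetical mapping of example use cases mentioned above to tiers.
EXAMPLE_USE_CASES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "credit scoring for loan applications": RiskTier.HIGH,
    "AI-enabled video game": RiskTier.LOW_OR_MINIMAL,
    "spam filter": RiskTier.LOW_OR_MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

In practice, classification will turn on the legal definitions and Annexes of the final text, not on any simple lookup of this kind.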
Enforcement and Governance
The Commission proposes that the Market Surveillance Regulation (2019/1020) will apply to AI systems generally. In addition, Member States will be required to appoint national supervisory authorities to perform supervisory and enforcement roles in relation to the AI Regulation. The Commission has also proposed the creation of the EAIB, which will “facilitate a smooth, effective and harmonised implementation” of the AI Regulation and include representatives of the national supervisory authorities. This structure of national authorities and an overarching board at EU level is very similar to that of the European Data Protection Board, which carries out this function with respect to the GDPR.
Penalties for Infringement
Where AI systems that do not respect the requirements of the AI Regulation are placed on the market or put into service, Member States will have to lay down effective, proportionate and dissuasive penalties, including administrative fines, for infringements, strongly echoing similar provisions for breaches of the GDPR.
The AI Regulation sets out the following fine thresholds (a short worked illustration follows the list):
- Up to €30m or 6% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements of the prohibition on certain AI practices or non-compliance with the requirements on data governance;
- Up to €20m or 4% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for non-compliance with any of the other requirements or obligations of the Regulation;
- Up to €10m or 2% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
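To illustrate how these caps interact with turnover, the short Python sketch below computes the maximum fine at each tier for a company with a hypothetical worldwide annual turnover of €1bn (the turnover figure is an assumption for illustration only):

```python
# Illustrative sketch only, not legal advice: the three fine tiers of the
# draft AI Regulation, each capped at the higher of a fixed amount and a
# percentage of total worldwide annual turnover. The turnover figure used
# below is a hypothetical assumption.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# (fixed cap in EUR, percentage of worldwide annual turnover) per tier,
# as set out in the list above.
TIERS = {
    "prohibited practices / data governance": (30_000_000, 0.06),
    "other requirements or obligations": (20_000_000, 0.04),
    "incorrect, incomplete or misleading information": (10_000_000, 0.02),
}

turnover = 1_000_000_000  # hypothetical: EUR 1bn worldwide annual turnover

for infringement, (cap, pct) in TIERS.items():
    print(f"{infringement}: up to EUR {max_fine(turnover, cap, pct):,.0f}")

# At EUR 1bn turnover, 6% (EUR 60m) exceeds the EUR 30m fixed cap, so the
# turnover-based figure sets the ceiling at the top tier.
```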
Next Steps
The AI Regulation is not yet in force. The European Parliament and the Member States will need to consider the Commission's proposal under the ordinary legislative procedure. Once adopted, the Regulation will be directly applicable across the EU. Organisations involved in, or considering, the use of AI systems that may be categorised as ‘high-risk’, or perhaps even as ‘prohibited AI practices’, under the draft AI Regulation will in particular need to consider its implications for their operations and future plans, and should monitor the progress of the draft Regulation carefully.
Also contributed by Róisín Finn.
This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.