On April 21, 2021, the European Commission (the ‘Commission’) published its first proposal for a legislative framework on Artificial Intelligence (AI), which aims both to promote the development of AI and to address the potential high risks it poses to safety and fundamental rights.
The proposal follows numerous calls for legislative action from the European Parliament and the European Council ‘to ensure a well-functioning internal market for artificial intelligence systems (‘AI systems’) where both benefits and risks of AI are adequately addressed at Union level’.
In the recitals, the Commission outlined the benefits of the use of artificial intelligence technologies in high-impact sectors, including climate change, the environment, health, the public sector and finance, but also stressed that these same technologies can bring about new risks or negative consequences for individuals and society.
With this proposal, the Commission's objectives are to:
Ensure that AI systems placed on the European Union market and used in the Union are safe and respect existing law on fundamental rights and values;
Ensure legal certainty to facilitate investment and innovation in AI;
Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
To achieve those objectives, the Commission indicated that the proposal includes principle-based requirements that AI systems should comply with, without hindering trade and innovation. It also includes a proposed single, future-proof definition of AI.
Some of the key provisions laid down in the proposal are as follows:
The regulation will apply, inter alia, to:
Providers placing on the market or putting into service AI systems in the European Union, irrespective of whether those providers are established within the Union or in a third country;
Users of AI systems located within the Union;
Providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.
A ‘provider’ is defined as ‘a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge’.
A ‘user’ is defined as ‘any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity’.
An ‘artificial intelligence system’ (AI system) is defined as ‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’.
Prohibited AI practices
This prohibition encompasses the placing on the market, putting into service or use of an:
AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.
AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics.
Classification of AI systems as high risk
An AI system is to be considered high-risk where both of the following conditions are met:
The AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II of the proposal;
The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
Annex III of the proposal includes a list of AI systems that should also be considered high-risk such as those focused on biometric identification and categorisation of natural persons, on management and operation of critical infrastructure or education and vocational training.
Requirements for high-risk AI systems
High-risk AI systems should comply with the proposal's requirements relating to risk management systems, data and data governance, technical documentation, transparency and provision of information to users, accuracy, robustness and cybersecurity, record keeping and so forth.
Obligations of providers and users of high-risk AI systems and other parties
In addition to the applicable compliance requirements, the proposal comprises, among others, requirements for providers of high-risk AI systems to have in place a quality management system and to keep the logs automatically generated by their high-risk AI systems, where applicable. Extensive obligations will also apply to manufacturers, importers, distributors and users of AI systems.
The proposal still needs to go through the legislative process before being adopted. If it becomes law, it will have a major impact on all targeted market players in the AI field.