On 21 April, the European Commission presented its “Artificial Intelligence Act”, a proposal for a regulation laying down harmonised rules on AI (1).

The legislative proposal follows up on Commission President Ursula von der Leyen’s announcement of new “legislation for a coordinated European approach on the human and the ethical implications of AI”, which she presented to the European Parliament in September 2019 as part of her political guidelines (2).

The European approach towards AI aims to put ethics and respect for European values at its centre. This has been clear at least since the 2018 Communication on AI for Europe (3), the beginning of the work of the High-Level Expert Group on its AI Ethics Guidelines (4) and the adoption of the Commission’s White Paper on AI (5) last year.

At the same time, the benefits and opportunities of AI for Europe’s industry and society have been widely recognised. For instance, the important advantages of machine learning technologies in healthcare and disease diagnosis, as well as for the circular economy and the fight against climate change, cannot be ignored.

The benefits for our companies in terms of competitiveness, and for users in terms of security, will depend on the uptake of AI in the EU, as European industry works towards making such technologies safe and secure. It is therefore essential to strike the right balance between building trust and supporting innovation.

To shape a European approach to AI, the European Commission proposes to regulate the placing on the market and putting into service of AI systems. The primary objective is to ensure safety and respect for fundamental rights and EU values in the development and use of AI. In this respect, the Commission opted for a horizontal approach to prevent fragmentation of the internal market for AI. In turn, the Artificial Intelligence Act would offer legal certainty to businesses.

The proposal adopts a risk-based approach, regulating concrete use cases rather than the technology as such. In this logic, providers and users of AI systems will need to identify the intended purpose of the AI system in order to determine whether the new rules apply. In addition, the proposed regulation strives to maintain a level playing field by obliging providers established in a third country to comply with its rules as soon as they offer an AI system on the internal market. To this end, they must appoint a “legal representative” in at least one EU Member State. Third-country users of AI systems – meaning those who have authority over the AI system, except for non-professional purposes – will be subject to the Artificial Intelligence Act insofar as their use of an AI system affects individuals in the EU. However, it remains to be seen whether these provisions can be effectively enforced.

More particularly, AI systems posing an “unacceptable risk” will be prohibited outright. According to the Commission, certain AI practices fundamentally contradict EU values and must therefore be banned:

  1. Manipulation of human behaviour in a manner that causes or is likely to cause physical or psychological harm;
  2. Exploitation of vulnerabilities of a specific group of persons;
  3. Social scoring used by public authorities; and
  4. Real-time remote biometric identification in public spaces for the purpose of law enforcement (with precise exceptions).

AI systems qualifying as “high-risk” will be subject to specific requirements (6). The proposed definition of these high-risk AI systems seems rather broad, as it includes both

  • AI systems intended to be used as a safety component of a product covered by EU harmonisation legislation (7) and which already undergo third-party conformity assessment, and
  • Stand-alone AI systems used in areas such as biometric identification and categorisation of natural persons, management and operation of critical infrastructure (e.g. road traffic) and employment (e.g. recruitment), amongst others.

Certain other AI systems that interact with humans will be subject to transparency requirements. Lastly, codes of conduct should foster the voluntary application of these requirements by AI systems that are not covered by the proposed regulation.

Finally, to support innovation, the European Commission proposes “regulatory sandboxes” to “provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service” (8). SMEs and start-ups that qualify for participation in the testing facility should be granted priority access. This could, for example, allow the processing of personal data in the public interest.

The legislative train on AI has started, but it has not yet arrived. The Commission’s proposal will now be taken up by the co-legislators, the European Parliament and the Council. Implementation will tell whether the Artificial Intelligence Act strikes the necessary equilibrium between trust and excellence for a rapid, broad and successful uptake of AI in the EU.

Angela Lo Mauro
European Affairs Adviser at FEDIL