On 19 February 2020, the European Commission presented a new “Digital Package”, which builds upon existing communications and publications on Artificial Intelligence (AI) and presents new initiatives to further the uptake of AI in the EU. As part of this package, the Commission presented a White paper on AI, “A European approach to excellence and trust”.

The present position paper constitutes FEDIL’s contribution to the Commission’s White paper on AI and the accompanying report on the safety and liability implications of AI, the Internet of Things (IoT) and robotics.  

CONTEXT 

The European Commission highlighted the importance of AI technologies, platforms and applications in its mid-term review of the Digital Single Market strategy published in May 2017, and on 25 April 2018 it published a Communication on AI for Europe, laying down the European approach for the first time: “ethical, secure and cutting-edge AI made in Europe”. The communication is built around three pillars:

  • Increasing public and private investment to support the use and performance of AI; to support fundamental research; to establish AI Digital Innovation Hubs for testing; and to support the uptake of AI through different platforms. 
  • Preparing for socio-economic changes by supporting the development of the right digital skills and closing the skills gaps. 
  • Last but not least, ensuring an ethical and legal framework through transparency, accountability and fairness as well as through guidance on liability and privacy rules. 

At the same time, Luxembourg showed its commitment to developing the technology in the Grand Duchy. Together with 25 other European countries, it signed a Declaration of cooperation on AI on 10 April 2018. In May 2019, the government of Luxembourg published its national strategic vision for Artificial Intelligence, aligned with the Commission’s approach, with the ambition to be among the most advanced digital societies in the world, to build a data-driven and sustainable economy, and to foster human-centric AI development.

In June 2018, the Commission also appointed 52 experts to a new High-Level Expert Group on AI (HLEGAI), representing academia, civil society and industry, which published Ethics Guidelines on Artificial Intelligence in April 2019 as well as Policy and Investment Recommendations.

Lastly, the European Commission presented its key digital initiatives for the next five years on 19 February 2020: the White paper on AI, accompanied by a Report on the safety and liability implications of AI, a Communication on “A European Strategy for Data” and a Communication on Shaping Europe’s Digital Future.

General comments 

Commission President Ursula von der Leyen and many Member States are increasingly proclaiming the idea of “technological sovereignty”. From our point of view, this concept should serve to create appropriate framework conditions that facilitate the development of the EU’s capabilities in strategic areas and encourage the development and use of new emerging technologies like AI. Our technology capabilities need to be strengthened by combining our engineering competences with new internet technologies. A strong and innovative technological base is the precondition for businesses to compete globally. Yet, we still observe that research is not well translated into European market solutions. To bring research forward this way, Europe must strengthen its entrepreneurial mindset, allow for bold ideas and encourage more testing facilities and regulatory sandboxes.

To succeed in the digital transition, we must firm up our technological capabilities while avoiding heavy regulatory burdens or protectionist measures that could harm long-term EU competitiveness. In this context, FEDIL closely followed the HLEGAI’s work on the Ethics Guidelines for Trustworthy AI and fully supports the importance of ethics in the application and use of AI. As long as the EU is able to strike the right balance between building trust and the constant need for innovation in AI, we truly believe that a “European approach” will favour the EU’s competitiveness and help its businesses be first movers in new emerging technologies.

Given the ubiquitous nature of AI, full harmonisation will prevent fragmentation of the European internal market. Rather than individual actions, a collective effort will benefit the EU, its businesses and its society as a whole. The AI market is cross-border and global. We therefore fully agree with the Commission that these issues should be addressed as much as possible at EU level, in order to avoid fragmentation of the EU’s Digital Single Market.

Specific comments 

I. THE DEFINITION OF AI 

In its White paper, without yet defining AI, the Commission points out that AI inevitably refers to “data” and “algorithms”, and that algorithms continue to learn during their lifecycle. It also explains that a definition should not limit AI to software, as it is often embedded in hardware.

First of all, Luxembourg’s industry would like to highlight that the definition used for the purpose of the White paper’s deliverables is of the utmost importance, as there are different types of definitions, all of which could have an impact on future regulation in this area. We strongly agree with the Commission’s view expressed in the White paper that “the definition of AI will need to be sufficiently flexible to accommodate technical progress while being precise enough to provide the necessary legal certainty”.

In line with the Commission’s perspective, we would recommend drawing inspiration from the updated definition elaborated by the HLEGAI: “AI systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).” 

However, the “goal-oriented” definition provided by the HLEGAI is a very comprehensive, “academic” definition, which was not drafted with a legislative proposal in mind. Furthermore, it can easily be misinterpreted by readers without a computer-science background. Our members suggest being very cautious when referring to AI as “deciding the best action(s) to take to achieve the given goal”, as the human designer of the system has to provide specific goals or challenges which the AI then tries to solve as well as possible. Usually, the application of the AI’s choices is performed manually or automatically by other mechanisms, actuators or expert systems. To be more easily understandable, we would suggest using a short version of the definition:

“AI systems are software (and possibly hardware that embed software) systems that can act in the physical or digital dimension by perceiving their environment through data gathering. AI systems can interpret the collected data, reason on the knowledge, and learn from the environment, in order to improve their performance to reach the given goal or challenge”.

II. THE RISK-BASED APPROACH 

The Commission White paper proposes a set of mandatory requirements to be applied to “high-risk” applications (biometric identification, human oversight, robustness and accuracy, information provision, record keeping and training data) and suggests a two-step approach by

  1. determining sectors that present high risks, such as transport, health and energy
  2. determining the intended use of AI within these sectors or across sectors

Factors that could be considered to assess whether a sector and the use of AI within it are high-risk include, for example, the legal impact on the rights of citizens or businesses; the material or immaterial damage; the impact on recruitment processes; and the purposes of remote biometric identification and other intrusive surveillance technologies.

While we welcome the risk-based approach and the idea of applying specific requirements in proportion to the risks of the AI application, the question remains how it will be possible, in practice, to determine which sectors and applications of AI qualify as high-risk. For us, the Commission’s two-step approach seems problematic to the extent that AI used for certain purposes would always be considered high-risk, for instance the use of AI applications for recruitment processes, in situations impacting workers’ rights, or for the purposes of remote biometric identification. In particular, these example exceptions are defined in a very open-ended way, making the scope broad and unpredictable.

Therefore, we strongly recommend using a “matrix” approach, combining a functional and a technical level, to define what is high-risk. In this process, our members are convinced that the determining factor should be the degree of interaction between the AI application and human beings. The “risk score” should define what level of requirements will be called for, on a mandatory or voluntary basis, and in line with the proportionality principle. Indeed, bad business requirements can lead to biased data selection even if the model is transparent and robust (functional issue), whereas good business requirements combined with poor data treatment may raise ethical issues (technical issue). In this line of thought, we believe that it is equally important to consider the probability that damage materialises and the opportunity of using AI. An illustrative sketch of such a risk matrix is given below.
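To make the matrix idea more concrete, the following is a purely illustrative sketch; the dimensions, weights and thresholds are hypothetical and are not taken from the White paper or from FEDIL’s position. It only shows how a score combining the degree of human interaction with functional and technical risk could map onto mandatory or voluntary requirements.

```python
# Purely illustrative risk-matrix sketch; all scales and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    human_interaction: int  # 0 (none) .. 3 (continuous, direct impact on people)
    functional_risk: int    # soundness of business requirements, 0 (good) .. 3 (poor/biased)
    technical_risk: int     # data treatment / model robustness, 0 (good) .. 3 (poor)

def risk_score(case: AIUseCase) -> int:
    # The degree of human interaction is the determining factor and amplifies
    # the functional and technical dimensions.
    return case.human_interaction * (case.functional_risk + case.technical_risk + 1)

def requirement_level(score: int) -> str:
    # Hypothetical mapping from risk score to the level of requirements.
    if score >= 9:
        return "high risk - mandatory requirements"
    if score >= 4:
        return "medium risk - voluntary requirements recommended"
    return "low risk - no specific requirements"

recruitment_tool = AIUseCase(human_interaction=3, functional_risk=2, technical_risk=1)
print(requirement_level(risk_score(recruitment_tool)))  # high risk - mandatory requirements
```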

In fact, one must also consider the potential losses from hindering or considerably slowing down the development or deployment of a promising AI system that might be subject to strict requirements because its use is considered highly risky.

Assuming that the determination of high-risk use of AI is appropriate, relevant and proportionate, we believe that prior conformity assessment procedures should take place to ensure that the mandatory requirements are complied with, whether this entails testing, inspection or certification. Authorities could decide whether an AI system is ready to be applied, used and put on the market, and a validation, labelling or certification process for human-centric and ethical AI could be set up. Nevertheless, this conformity assessment would have to be done relatively swiftly to avoid significantly delaying the placing of the AI system on the market, especially considering how fast the technology is evolving.

III. THE REQUIREMENTS 

  • Specific requirements for biometric identification 

EU data protection rules prohibit in principle the processing of biometric data for the purpose of uniquely identifying a natural person, except under specific conditions. Specifically, under the GDPR, such processing can only take place on a limited number of grounds, the main one being for reasons of substantial public interest.

The application of biometrics generally raises privacy concerns because of the risk of bias or incorrect recognition. From FEDIL’s point of view, however, the privacy concerns often highlighted for the training and analysis of personal data can also be mitigated by technical means, such as specific encryption techniques that preserve the analytical properties of the data. It would therefore be rather reductive to simply prohibit such use for biometric identification and, considering the advantages for users in terms of security and the opportunities for Europe’s industry to make such technologies safe and secure, it might be more appropriate to require the use of advanced security measures, including encryption, liveness detection and solid data governance.

Moreover, our experts underline that it is essential to distinguish the use of biometrics for verification purposes only from its use for identification and surveillance purposes. As a matter of fact, we risk stifling innovation by putting both under the same conditions. Therefore, we strongly welcome the distinction made in the White paper between biometric authentication/verification and biometric identification. This is the kind of precise definition of use cases that is required. On the one hand, biometrics can be used to facilitate and accelerate border management and passenger verification, secure physical or digital access, and reduce fraud. In these cases of biometric verification, it is possible to promote responsible innovation and include the “privacy by design” principle in order to comply with the GDPR. For example, users can opt in to biometric verification for the use of their digital banking application and companies can ensure their data will be deleted when they opt out. On the other hand, it is more difficult to align the use of biometrics for identification purposes with the privacy by design principle because of the way data might be collected, handled and stored.

FEDIL recognises that it is always difficult to strike the right balance between the advantages and the negative impacts that can occur when using biometric identification, for example for purposes of public security. A common illustration is the documented bias of facial recognition systems with respect to skin colour. Still, negative impacts can be mitigated. For instance, law enforcement officers can be trained so that tools are used correctly, with a high level of accuracy for predictions, and the AI tool can be used only as an aid, with humans making decisions based on a multitude of sources.

In this context, we also welcome the intent to launch a broad European debate on the specific circumstances, if any, which might justify the use of remote biometric identification in public places, and on common safeguards.

  • Robustness and accuracy 

According to the White paper, “AI systems must be technically robust and accurate in order to be trustworthy. That means that such systems need to be developed in a responsible manner […] AI systems [must] behave reliably as intended.”

A number of citizens are sceptical towards AI. It is our belief that in many cases this scepticism is caused by misinformation and is therefore irrational. FEDIL is thus convinced that AI has to be made as understandable as possible for everyone and that the related research and industrial efforts must be fostered. As AI filters into our society and its various technologies come closer to citizens, the need for security will grow. Our perception is that the digitalisation of industry and the rising number of connected devices make cybersecurity and safety an inevitable precondition for a stable digital economy and for consumers’ trust. The proposal rightly includes robustness and accuracy among the potential requirements to be fulfilled by high-risk applications of AI. For AI to be as cybersecure as possible, our member companies consider that testing must be carried out and upgraded continuously. Regular updates, also on the quality of the data itself, accompanied by an effective enforcement method via regulators or agencies, would further support the technical robustness of AI algorithms.

FEDIL welcomes European certification and standards aimed at building up consumers’ trust and improving the security of AI products and services. Such EU-level harmonisation would facilitate cross-border business and lower unit compliance costs, which is essential for European companies, including SMEs and start-ups. It is important to strike the right balance between different protection profiles and the need to adopt a broad and general notion of cybersecurity. A strict “one-size-fits-all” certification scheme would be unacceptable. It is crucial to reflect the usage and evolution of the products’ and services’ equipment, processes and risks.

Regarding accuracy, it has been defined by the HLEGAI as an “AI system’s ability to make correct judgements […] predictions, recommendations or decisions based on data or models”.

While we agree that an explicit and well-formed development and evaluation process can support, mitigate and correct unintended risks from inaccurate predictions, the concept will still require a different quantification or level of precision for different models. Furthermore, the importance of accuracy depends on the use of the AI system. FEDIL’s experts point out that deployers should be able to identify what level of accuracy is required for their product or service to be trustworthy.

For companies to be prepared for this task, it will be essential to have a clear and precise definition of the method used to measure accuracy and of the appropriate requirements. The accuracy requirements will also have to take into account that the level of accuracy will differ (an illustrative sketch follows the list below):

  • When the model is trained in the lab, before being put on the market, factors such as time and the quality of the test environment determine a certain level of exactitude of its decisions.
  • When the model is influenced by factors in real life, changed circumstances may alter the functioning of the algorithm only slightly, yet enough to lead to a different level of accuracy.
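As a purely illustrative sketch rather than a prescribed method, the following snippet (assuming scikit-learn and synthetic data; the required accuracy level is a hypothetical figure) shows how a deployer could compare the accuracy measured in the lab on a held-out test set with the accuracy observed on drifted, real-life-like data, and check both against a use-case-specific requirement:

```python
# Illustrative comparison of "lab" accuracy vs. accuracy on drifted live data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
lab_accuracy = accuracy_score(y_test, model.predict(X_test))

# Simulate real-life drift: the environment shifts the input distribution slightly.
X_live = X_test + np.random.normal(0, 0.5, X_test.shape)
live_accuracy = accuracy_score(y_test, model.predict(X_live))

print(f"lab accuracy:  {lab_accuracy:.2f}")
print(f"live accuracy: {live_accuracy:.2f}")

REQUIRED_ACCURACY = 0.85  # hypothetical; depends on the use of the AI system
if live_accuracy < REQUIRED_ACCURACY:
    print("accuracy below the required level - review data and model")
```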

In reality, accuracy, transparency and explainability are very complex to achieve. Their shortcomings can be mitigated by ethical guidelines and, conversely, more explainability can resolve some ethical issues. We see them as key underlying factors of many AI principles and recommend that they be studied extensively and precisely.

In addition, it is fundamental to enhance general education and awareness about what AI can and cannot do. To gain and sustain trust and transparency, it is as important to inform citizens about the real use and impact of AI as it is not to distort reality. Digital literacy will increase the general public’s understanding of what AI is and broadly how it functions, and eventually generate trust. Luxembourg’s industry calls for major investments in skills and technical engineering, as we are confident this will improve the understanding of AI.

  • Training data 

First, the White paper suggests that high-risk AI systems should be trained on data that has been gathered and used in accordance with European rules.

In line with the Commission’s proposal, FEDIL deems it very important that the data used to train an AI system meets EU safety standards, and that reasonable efforts to guarantee non-discrimination (e.g. data sets should be sufficiently representative of gender and ethnicity) and compliance with privacy requirements (e.g. the GDPR) should be required. Our members are already committed to these principles and apply them systematically in their operations.

Further, we recommend focusing on the quality of the training data, including appropriate diversity, lack of bias and the viability of the direct source (e.g. signal, sensor), rather than on its geographical source (e.g. non-EU). Indeed, the quality of the training data has profound implications for the AI model’s subsequent development. Without a foundation of high-quality training data that is adequate, accurate and relevant, even the most performant algorithms can be rendered useless. To determine the quality of the training data, different external and internal factors and their prioritisation have to be assessed together.

In many cases, gathering the data includes getting access to the raw data and choosing the important attributes of the data that would be good indicators of the outcome the machine learning model should predict. This is important because the quality and quantity of the data will determine how good the AI model can be. In this context, FEDIL defends the idea of a strong governance strategy to make sure the “humans in the loop” who gather the data and prepare it for use in machine learning maintain the highest quality after every update.
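As a minimal, hypothetical sketch of such “human in the loop” quality control (the column names and thresholds are invented for illustration), automated checks can flag completeness and representativeness issues before each training run or data update:

```python
# Illustrative training-data quality checks; columns and thresholds are hypothetical.
import pandas as pd

def check_training_data(df: pd.DataFrame) -> list[str]:
    issues = []

    # Completeness: flag attributes with too many missing values.
    missing = df.isna().mean()
    for column, share in missing.items():
        if share > 0.05:
            issues.append(f"{column}: {share:.0%} missing values")

    # Representativeness: e.g. gender balance in the training set.
    if "gender" in df.columns:
        shares = df["gender"].value_counts(normalize=True)
        if shares.min() < 0.30:
            issues.append(f"gender imbalance: {shares.to_dict()}")

    return issues

df = pd.DataFrame({
    "gender": ["f", "m", "m", "m", "m", "m", "m", "m", "m", "m"],
    "age": [34, None, 29, 41, 37, 52, None, 45, 31, 28],
})
for issue in check_training_data(df):
    print("review before training:", issue)
```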

  • Record keeping and proactive information sharing 

Second, the White paper suggests that keeping records and data could become mandatory for high-risk AI systems. According to the Commission, it should be possible to trace back problematic decision-making in order to allow verification and supervision and to facilitate enforcement. Therefore, a description of the characteristics of the data set and how it was selected, the data set itself, and information on programming, training, processing and testing should be kept for a reasonable period and be made available upon request.

While we acknowledge the importance of tracing back problematic decision-making, and therefore also the need for more transparency through proactive information sharing, such sharing should remain voluntary. In fact, public transparency of key information may conflict with certain areas of intellectual property rights protection. In particular, considering some of the techniques used to create models, such information sharing would erode certain industrial advantages.

Good practices exist in data management. Yet, our experts report that storing the change log (audit log) and building a strong architecture are not always properly understood, are immensely time-consuming and expensive, and take up a lot of storage space. In our experience, keeping every piece of information would thereby dilute the quality of the more important data.

Moreover, companies, and especially SMEs and start-ups, do not always have the resources to properly document and describe functions or their roles in detail when coding. Currently, it seems technically difficult to keep all the programming or frameworks that were used over the years, and it is extremely complicated to determine when a data set was created or how to find the documentation of the whole process used to create the algorithm.

Although an adapted retention policy could help mitigate issues related to data storage, it is very important for our industry that this kind of requirement applies only where the AI application scores high in the risk matrix.

We notice obstacles at different levels (an illustrative record-keeping sketch follows the list below):

  • At the model management level, it is already possible to convert generated models from one framework to another, breaking provenance at the framework level as well as at the version level.
  • At the data level, datasets are often modified, altered or even removed after the training of a model. It then becomes impossible to analyse the dataset at a later moment. In practice, it is often the case that models are distributed while the dataset used to generate them no longer exists.
  • At the configuration level, the parameters used for training are almost never kept except on specialised AI platforms, although these parameters are critical to understanding model performance.
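As an illustrative sketch only (file names and record fields are hypothetical), a lightweight training record that hashes the dataset and stores the training parameters and environment alongside the model would address the data- and configuration-level obstacles described above without keeping every piece of information:

```python
# Illustrative record keeping for a training run: dataset hash, parameters, environment.
import hashlib
import json
import platform
from datetime import datetime, timezone
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_training_record(dataset_path: Path, params: dict, record_path: Path) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_file": str(dataset_path),
        "dataset_sha256": sha256_of_file(dataset_path),  # proves which data was used
        "training_parameters": params,                    # learning rate, epochs, seed...
        "environment": {"python": platform.python_version()},
    }
    record_path.write_text(json.dumps(record, indent=2))

# Example call (paths and parameters are hypothetical):
# write_training_record(Path("training_data.csv"),
#                       {"learning_rate": 0.01, "epochs": 20, "seed": 42},
#                       Path("training_record.json"))
```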

A significant problem also resides in the predictive value of data collected within a specific timeframe and its relevance on the day the damage occurs. Data captured at a given moment may reflect past states that still correlate with a model’s behaviour, but the requirements should take into account that, conversely, such data may no longer be useful in the near future. Here again, we want to point out that, depending on the business case, requirements can be more or less effective.

Not least, it is FEDIL’s conclusion that transparency requirements need to be known well ahead of time, in order to unlock potential opportunities and allow creators to provide the required information in advance.

  • Human oversight 

Another very important requirement, which has already been put forward by the HLEGAI, is “human oversight”. The White paper explains that, because of the autonomous behaviour of certain AI systems, human oversight may be needed as a safeguard from the product design stage and throughout the lifecycle of AI products and systems.

FEDIL conceives of AI as being made for humans as its end goal, providing solutions and opportunities to improve people’s lives. Hence, AI should always allow for human supervision and scrutiny over its use and development in order to avoid negative impacts. Humans should keep the right to intervene. Although AI is there to make recommendations, it should always be up to a human to take the decision. A minimal sketch of this pattern is given below.
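The following is a minimal, hypothetical sketch of this human-in-the-loop principle (all names are invented): the AI system only produces a recommendation, and the final decision is always taken by a person, who may follow or override it.

```python
# Illustrative human-in-the-loop pattern: the AI recommends, a human decides.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str
    confidence: float
    explanation: str

def ai_recommend(application: dict) -> Recommendation:
    # Placeholder for a model inference call (hypothetical).
    return Recommendation(action="approve_loan", confidence=0.87,
                          explanation="stable income, low existing debt")

def decide(application: dict, human_decision: Callable[[Recommendation], str]) -> str:
    recommendation = ai_recommend(application)
    # The recommendation is presented to a person, who keeps the right to
    # intervene: the final decision is always taken by the human reviewer.
    return human_decision(recommendation)

# The human reviewer can follow or override the AI recommendation:
print(decide({"applicant_id": 123}, human_decision=lambda rec: "reject_loan"))
```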

IV. THE LIABILITY FRAMEWORK 

  • Consumer protection 

According to the White paper, even though developers and deployers of AI are already subject to European legislation on fundamental rights, consumer protection, and product safety and liability, the very specific characteristics of AI make the application and enforcement of this legislation more difficult. They could make it harder to trace the damage back to a person, which would be necessary for a fault-based claim under most national rules. The White paper explains that this could increase costs for victims and that liability claims against parties other than producers may be difficult to make or prove. More precisely, the lack of transparency of AI would make it difficult to identify and prove possible breaches of laws, attribute liability and meet the conditions to claim compensation. For this reason, the Commission sees a need to examine whether current legislation can address AI-related risks or whether adaptations are needed.

Whilst technological innovation should be allowed to continue to develop AI systems, our members fully agree with the need to ensure that people who have suffered harm caused by the involvement of AI systems enjoy the same level of protection as those affected by other products.

Before changing legislative frameworks, we urge the Commission to carefully assess this objective with more empirical data or case-law at hand. Even though the HLEGAI’s publications as well as the Report on the safety and liability implications of AI provide many important thoughts on possible gaps, we recommend caution when proceeding with little or no empirical data.

To a certain extent, our industry sees the need for a new liability scheme to adapt existing regimes to the new technological realities. However, there is not only one possible set of rules but rather different use cases, each referring to the producer’s or operator’s liability towards the end user. We recognise that possible new or amended rules on AI and liability do not exist in a vacuum but rather fit within the EU’s broader AI framework. The envisaged broader regulatory changes (e.g. on data quality, transparency and safety) will diminish the need for new liability rules (such as changing the burden of proof). Adding stricter liability obligations or liability for unforeseeable risks, regardless of the ethical or responsible safeguards or framework in place, could reduce incentives to develop responsible and safe AI.

  • Burden of proof 

The Commission is seeking views on whether and to what extent it may be needed to mitigate the consequences of complexity by adapting the burden of proof required by national liability rules for damage caused by the operation of AI applications.

We encourage the Commission to prioritise balance. Consumers need clear, workable rules. And innovators, including innovators of algorithms and future computing technologies such as artificial intelligence, need protection from liability in scenarios where that liability could not reasonably have been foreseen. Without this protection, our companies will substantially slow the rate of innovation for fear of triggering unforeseeable consequences that could lead to significant liabilities. The remarkable innovations we have seen over the last twenty or more years only exist because the legal environment for them is right and balanced.

  • Strict liability 

The Commission’s report on the safety and liability implications of AI puts forward the idea of strict liability. In some sectors, strict liability already exists, and we therefore understand the intention to introduce similar standards in AI scenarios. On the other hand, strict liability could slow down the uptake of AI in our economy.

While we acknowledge the need for regulation on AI, it should not create burdens that harm a competitive environment. Hence, strict liability should be limited to high-risk situations where there is an objective need, e.g. where other regulations are deemed insufficient to cover the risks. This kind of assessment could also evolve over time, e.g. where sectors become safer after the introduction of AI.

The Product Liability Directive has been designed as a general regulatory framework to hold producers and/or intermediaries liable. Our experts stress that this liability regime is still fit for purpose and can be applied to new emerging technologies.

Further, the Product Liability Directive provides an effective mechanism for consumers to seek damages when injured by defective products or in case of property damage. It is a strict liability regime that was specifically designed to ensure consumer safety in relation to straightforward products (“commodities”) for which the producer or manufacturer is best placed to detect and prevent defects.

We do not exclude that certain changes may be appropriate. However, expanding this regime to standalone software could become extremely problematic because such software is increasingly offered as a service, with different characteristics and degrees of commoditisation, and usually also allows for different degrees of human intervention (cf. standalone software in the Digital Content & Service Directive and products with embedded software in the Sale of Goods Directive).

  • Allocation of responsibility  

Moreover, the Commission concludes that there would be uncertainty as regards the allocation of responsibilities between the different economic operators in the supply chain, for example if AI is added after the product is placed on the market by a party that is not the producer.

In our opinion, the idea of a “phased approach”, placing liability with the actors deemed best positioned to address potential risks (e.g. developers where the risk arises at the development phase or deployers where the risk arises at the use phase), needs further assessment, especially regarding the lifecycle of AI, the control over the application and proper documentation.

For instance, liability can arise where insufficient precautions have been taken when creating AI systems or when owning, using or updating the system. Enhanced cooperation between the different value chain players can be useful, but the idea of joint and several liability could make it far more difficult for companies to manage their risks, regardless of how carefully they design their products.

We should therefore duly consider the impact of such a rule on innovation and whether there might be better ways to encourage accountable and responsible AI practices, including adherence to the Ethics Guidelines.

On the plurality of actors in digital ecosystems, it is important for our companies to note that while many innovative products, such as smart devices or robots, involve multiple producers (separate hardware and software producers, for example), this is also true of many physical products today (e.g., cars have many hundreds of suppliers). These are already effectively regulated by the EU’s existing product liability regime. As we see it, the regime’s simple and technology neutral framework means that even these more complex production scenarios can largely be addressed under its terms.

  • Insurance schemes 

EU law requires obligatory liability (third-party) insurance e.g. for the use of motor vehicles, air carriers and aircraft operators, or carriers of passengers by sea. Laws of the Member States require obligatory liability insurance in various other cases, mostly coupled with strict liability schemes, or for practising certain professions. New optional insurance policies (e.g. cyber-insurance) are offered to those interested in covering both first- and third-party risks. Overall, the insurance market is quite heterogeneous and can adapt to the requirements of all involved parties.

However, we reckon that this heterogeneity, combined with the multiplicity of actors involved in an insurance claim, can lead to high administrative costs for the parties involved, due to the lengthy processing of insurance claims and the unpredictability of the final result.

We have no doubt that it will have to be assessed whether new technologies could cause legal uncertainty as to how existing laws would apply (e.g. how the concept of fault would apply to damage caused by AI). Our members stress that this could in turn discourage investment as well as increase information and insurance costs for producers and other businesses in the supply chain, especially European SMEs. In addition, should Member States eventually address these challenges individually in their national liability frameworks, it could lead to further fragmentation, thereby increasing the costs of putting innovative AI solutions on the market and reducing cross-border trade in the Single Market. It is important that companies know their liability risks throughout the value chain, can reduce or prevent them, and can insure themselves effectively against these risks.

In this perspective, the idea of strict liability and obligatory liability insurance schemes should be limited to exceptional cases in which AI is used in ways that generate risks comparable to activities that are already subject to strict liability (e.g. automobiles/transportation) and where the uptake is sufficiently broad so that the risk can be spread proportionally. FEDIL views any other approach as potentially problematic for AI developers: it threatens to disincentivise innovation by making it very difficult for developers to control their liability risk.

The authors
Angela Lo Mauro
European Affairs Adviser at FEDIL
angela.lomauro@fedil.lu