
Risks of Artificial Intelligence – Safety Aspects

Industry solutions relying on artificial intelligence (AI) are growing faster than many had predicted. In addition to the important personal data protection considerations, another significant question raised by this new and emerging technology concerns the safety problems associated with AI and the legal response to them.

So far, neither the European Union nor Bulgaria has adopted legislation specifically tackling the AI phenomenon and, in particular, the safety concerns associated with AI. Although the existing rules, as supplemented by the currently applicable set of safety standards, may be sufficient to address some of the safety challenges brought by AI, in 2018 the European Commission began a process of systematically addressing the implications of emerging technologies such as AI for the existing legal framework[1]. To date, its efforts have led to the publication of a series of soft-law documents including, among others, the 2019 Ethics Guidelines for Trustworthy AI[2], the 2020 White Paper on AI[3] and the Commission Report on Safety and Liability Implications[4] accompanying the White Paper. These documents have brought forward some of the key regulatory issues related to AI and are likely to form the basic blueprint of the contemplated EU legislation in this field.

In any case, it is clear that in the past few years the European Union has begun to pay serious attention to the regulatory treatment of AI. In this context, AI safety will almost certainly be a high priority for the EU. Confirming this, the European Commission has stated explicitly that: “AI systems should integrate safety and security-by-design mechanisms to ensure that they are verifiably safe at every step, taking at heart the physical and mental safety of all concerned.”[5]

I. Existing safety rules

The currently applicable safety framework in the EU consists of sector-specific legislation (for example, the Machinery Directive, the Radio Equipment Directive, the Measuring Instruments Directive), complemented by the General Product Safety Directive (Directive 2001/95/EC), which mandates that all consumer products, even those not regulated by sectoral legislation, must be safe. Standardization is also a key component of the existing product safety rules in the EU.

At national level, the General Product Safety Directive has been transposed into Bulgarian law primarily through the Consumer Protection Act, which extends the applicable safety requirements not only to products but also to services provided on the Bulgarian market[6].

The reality is that a large part of the existing product safety legislation was adopted prior to the emergence of modern digital technologies such as AI. That is why it rarely contains provisions explicitly addressing the new challenges and risks of these technologies. The established definition of a product, however, is relatively broad[7] and purposefully technologically neutral, thus allowing the general product safety requirements to apply to the widest possible array of products, including products incorporating AI technologies.

II. AI Safety Challenges

Analysing the new challenges brought by AI is a key step both towards assessing the applicability of the existing safety rules to this technology and towards evaluating the need for new legislation. That is why we examine below some of the main challenges related to the utilization of AI from today’s perspective.

A. Connectivity

Connectivity is a key feature of many emerging digital technologies, and AI-powered products are no exception. Connectivity can transform the characteristics of many conventional products and services but, on the downside, it may also pose certain safety hazards.

In particular, connectivity may challenge the traditional concept of safety by making the respective product a doorway to cybersecurity risks. For example, AI assistants with certain software security gaps may make it possible for malicious third parties to gain unauthorized access to the control systems connected to the AI assistant, which could in turn pose a safety hazard. One possible approach to addressing those risks at the EU level would be the prompt and efficient implementation of the EU cybersecurity certification framework for products, services and processes established by the Cybersecurity Act[8].
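By way of purely technical illustration (none of the cited instruments prescribes a specific mechanism), the following minimal Python sketch shows one widely used safeguard against such unauthorized access: requiring every command sent to a connected control system to carry a cryptographic signature computed with a shared secret, so that commands forged by third parties without the key are rejected. All names and values here are hypothetical.

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned to the AI assistant and the
# control system it is paired with (never hard-coded in real products).
DEVICE_KEY = b"example-device-key"

def sign_command(command: str, key: bytes = DEVICE_KEY) -> str:
    """Compute an HMAC-SHA256 signature over the command payload."""
    return hmac.new(key, command.encode(), hashlib.sha256).hexdigest()

def execute_if_authentic(command: str, signature: str) -> None:
    """Run the command only if its signature matches; reject it otherwise."""
    expected = sign_command(command)
    # compare_digest avoids timing side channels when comparing signatures.
    if hmac.compare_digest(expected, signature):
        print(f"Executing trusted command: {command}")
    else:
        print(f"Rejected unauthenticated command: {command}")

# A legitimate, signed command is executed...
execute_if_authentic("unlock_front_door", sign_command("unlock_front_door"))
# ...while a forged command from a party without the key is rejected.
execute_if_authentic("unlock_front_door", "forged-signature")
```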

The loss of connectivity in products which rely heavily upon this feature may also entail safety risks. If, for example, autonomous vehicles that combine connectivity and AI technology to navigate through traffic and handle complex situations lose connectivity, a road accident could easily occur.
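Again purely as an illustration of the engineering side of this risk, the sketch below shows a simplified “fail-safe” pattern sometimes used in connected systems: if no heartbeat from the remote service arrives within a set window, the controller degrades to a minimal safe behaviour instead of continuing normal operation. The class, timing value and fallback behaviour are illustrative assumptions, not requirements drawn from the legislation discussed here.

```python
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds without contact before degrading (illustrative value)

class VehicleController:
    """Toy controller that falls back to a safe state on connectivity loss."""

    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self) -> None:
        # Called whenever a message from the remote service arrives.
        self.last_heartbeat = time.monotonic()

    def step(self) -> str:
        # Decide the driving mode for this control cycle.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            return "fail-safe: reduce speed, activate hazards, pull over"
        return "normal: full autonomous operation"

controller = VehicleController()
print(controller.step())  # connectivity present -> normal operation
time.sleep(2.1)           # simulate loss of connectivity
print(controller.step())  # timeout exceeded -> degraded safe mode
```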

Although there are certain security-related provisions in the sector-specific legislation[9], the General Product Safety Directive does not provide for specific mandatory essential requirements against cyber-threats affecting the safety of users.

A closer look at the currently applicable legislation shows, however, that it applies an extended concept of safety, linked both to the use of the products and to the risks which need to be addressed to make them safe. Depending on the respective piece of EU legislation, this concept may extend not only to the intended use of the product but also to its foreseeable use and, in some cases, even to its reasonably foreseeable misuse[10]. Moreover, the currently applicable concept of product safety encompasses protection against all kinds of risks arising from the product: not only mechanical, chemical and electrical risks, but presumably also cyber risks and risks related to the loss of connectivity.

In any case, the lack of explicit legal provisions specifically addressing connectivity risks may, in some instances, prove to be an obstacle to ensuring sufficient consumer protection and legal certainty. That is why, in its future legislative activity, the EU may consider introducing explicit provisions to this effect within the scope of the relevant pieces of EU legislation, thus delivering on its objective of providing better protection of users and the necessary level of legal certainty.

B. Opacity

The opacity of AI systems is a major concern in terms of guaranteeing the safe exploitation of products placed on the market. Opacity (also known as the “black box” feature) of AI systems denotes the impossibility of retracing the steps a computer algorithm took to reach a certain decision. This increases the level of risk associated with AI as a technology because, if the manufacturer is not able to quickly and clearly identify certain malfunctions, the product could potentially cause material damage to property or even physical harm to users.

The existing product safety legislation does not address the increasing risks derived from the opacity of AI systems. A possible solution to the issue of opacity, which the EU is expected to consider, is the adoption of formal requirements for the transparency of AI algorithms, as well as for robustness, accountability and, where relevant, human oversight. It is also possible to put in place a type of ex-post enforcement mechanism, which would increase the degree of overall control over these technologies and, ultimately, the trust placed in them.

One possible route to achieving the desired outcome would be to impose obligations on the developers of AI algorithms to disclose the design parameters and the metadata of datasets in case of accidents. While this solution is likely to be viewed as intrusive, the general public interest in safety may ultimately prevail over the business interests of the manufacturers. In any case, it will be extremely important to strike a good balance between the protection of commercial interests and the general objective of safety.
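Although no particular technical mechanism is mandated, a disclosure obligation of this kind could in practice be supported by routine audit logging: recording, for every decision an AI system takes, the model version, dataset metadata and a fingerprint of the inputs, so that the record can be produced after an accident. The minimal Python sketch below, with hypothetical field names and values, illustrates the idea.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, dataset_meta: dict, inputs: dict, decision: str) -> str:
    """Build a JSON audit entry that could later be disclosed to investigators."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Metadata about the training data, e.g. its source and collection period.
        "dataset_meta": dataset_meta,
        # A hash fingerprints the exact inputs without storing raw personal data.
        "input_fingerprint": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    return json.dumps(record)

# Hypothetical decision taken by an AI-powered product:
entry = audit_record(
    model_version="obstacle-detector-1.4.2",
    dataset_meta={"source": "internal-fleet-logs", "collected": "2019-2020"},
    inputs={"sensor": "front-camera", "frame_id": 120394},
    decision="emergency_brake",
)
print(entry)  # one line per decision, appended to a tamper-evident log in practice
```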

C. Data dependency

One of the core functionalities and unique characteristics of AI systems is the ability to produce certain outputs after processing large amounts of available data. As such, their operation is highly dependent on the accuracy and relevance of such data.

Although the currently applicable legislation does not explicitly address the safety risks derived from faulty or biased data, the General Product Safety Directive requires that producers address the risks pertaining to the normal or reasonably foreseeable conditions of use of the product. Thus, depending on the respective “use” of the product, during the design and testing phases the producers of AI systems may be under an obligation to anticipate the impact of data accuracy and relevance on the safety of the product. For example, an AI-based system designed to detect specific objects may have difficulty distinguishing them in poor lighting conditions. To address these risks pertaining to the use of the product, the developers of the system should anticipate this potential drawback and include training data covering objects set in both optimal and poorly lit environments. Another example relates to agricultural robots, such as fruit-picking robots, designed to detect and locate ripe fruit on trees or on the ground. A shortcoming in the datasets fueling those algorithms may result in these robots making a poor decision and, as a consequence, possibly even injuring an animal or a person.
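Purely by way of illustration, the following simplified Python sketch shows what “including data set in both optimal and poorly lit environments” can mean in practice: darkened variants of each training image are generated so that a detection model also learns from low-light examples. Images are reduced here to tiny grids of brightness values, and the scaling factor is an arbitrary assumption.

```python
def darken(image: list[list[int]], factor: float = 0.3) -> list[list[int]]:
    """Simulate a poorly lit version of an image by scaling pixel intensities."""
    return [[int(pixel * factor) for pixel in row] for row in image]

# A toy "well-lit" training image: a 3x3 grid of brightness values (0-255).
well_lit_image = [
    [200, 210, 190],
    [180, 220, 205],
    [195, 200, 215],
]

# Augment the training set so the model also sees low-light conditions.
training_set = [well_lit_image, darken(well_lit_image)]

for i, image in enumerate(training_set):
    print(f"sample {i}: {image}")
```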

Since data accuracy is an important factor for the proper operation of AI systems, future legislation may include provisions outlining specific requirements aimed at reducing the safety risks associated with faulty data as early as the design and development stage, as well as a mechanism to ensure that data quality is maintained throughout the entire exploitation cycle of AI products and systems.

D. Autonomy

Some AI-powered products are considered able to act relatively autonomously, perceiving their environment and performing certain functions without following a strictly pre-determined set of instructions. Autonomy is thus one of the unique features of AI technology, but it also challenges the traditional concept of safety because of the unintended outcomes that the “behaviour” of such autonomous products may have.

The existing product safety rules already require manufacturers to consider the “use” of their products (the intended use, the foreseeable use and/or the reasonably foreseeable misuse) throughout the products’ lifetime. Manufacturers must also ensure that the products are accompanied by instructions and safety information. Therefore, insofar as the future “behaviour” of AI products can be determined in advance in the course of the risk assessment carried out before the products are placed on the market, the autonomous behaviour of AI-powered products should not give rise to any particular legal loopholes.

Nevertheless, the autonomy of an AI system may actually pose a safety hazard if its behaviour comes to include the performance of tasks which were not initially foreseen by the developers (as a result of, for example, an ongoing “self-learning” feature). In such a case, the risk assessment performed prior to the placing of the product on the market may no longer be sufficient to ensure that the product will be safe throughout its lifecycle. If compliance with the applicable safety requirements is affected because the foreseen and intended use of the product has been modified by the autonomous behaviour of the AI system, it may be necessary to carry out a new risk assessment of the AI system.
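As a purely illustrative sketch of how such a trigger might work in practice (nothing in the cited documents mandates this design), a self-learning system’s behaviour can be monitored against the behavioural envelope established during the pre-market risk assessment, with a new assessment flagged as soon as the observed behaviour leaves that envelope. All figures below are hypothetical.

```python
# Behavioural envelope established during the pre-market risk assessment:
# the share of "hard braking" decisions observed in validation (illustrative numbers).
ASSESSED_HARD_BRAKE_RATE = 0.05
TOLERANCE = 0.03  # deviation beyond which a new risk assessment is triggered

def needs_reassessment(decisions: list[str]) -> bool:
    """Flag the system for a new risk assessment if its behaviour has drifted."""
    if not decisions:
        return False
    observed_rate = decisions.count("hard_brake") / len(decisions)
    return abs(observed_rate - ASSESSED_HARD_BRAKE_RATE) > TOLERANCE

# After further "self-learning" in the field, the system brakes hard far more
# often than the pre-market assessment anticipated:
field_decisions = ["hard_brake"] * 20 + ["coast"] * 80
print(needs_reassessment(field_decisions))  # True -> re-run the risk assessment
```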

Under the currently applicable legal framework, where producers become aware that a product placed on the market poses risks having an impact on safety, they are required to immediately inform the competent authorities and take action to prevent the risks to users. In view of the general nature of this requirement, an ex-post risk assessment procedure focused on the safety impact of autonomous behaviour may be introduced for AI-powered products which undergo important but initially unforeseen changes during their lifetime (e.g. a different product function). In addition, the relevant pieces of legislation may include more specific, reinforced requirements for manufacturers regarding instructions and warnings for users.

III. Conclusion

The universal character and potentially ubiquitous application of artificial intelligence clearly pose new challenges that the current legal framework may not always be suited to handle properly. In order to guarantee legal certainty and the smooth functioning of a multitude of economic sectors, changes and amendments to the existing legislation are expected to be introduced soon. In the meantime, however, it would be wrong to assume that AI, as a relatively new widescale phenomenon, will remain outside the scope of the currently applicable safety framework. Quite the opposite: the general nature of the already established safety rules and regulations, and the fact that they are built upon the concept of technological neutrality, mean that, at least to a certain degree, they will also apply to emerging technologies such as AI.

The above material is provided for informational purposes only. It is not intended to, and does not, represent legal advice and is not to be acted on as such. BOYANOV & Co.’s team would be happy to provide further information and specific legal advice and assistance subject to a specific engagement.

 

For further information on the subject, you may contact:

Nikolay Zisov, Partner, n.zisov@boyanov.com

Trayan Targov, Associate, t.targov@boyanov.com


[1] Communication from the Commission on Artificial Intelligence for Europe (COM(2018) 237 final), dated 25.04.2018, available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN

[2] Ethics Guidelines for Trustworthy Artificial Intelligence, presented on 8 April 2019 by the High-Level Expert Group on AI, available at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

[3] White Paper on Artificial Intelligence – A European approach to excellence and trust (COM(2020) 65 final), dated 19.02.2020, available at https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

[4] Commission Report on safety and liability implications of AI, the Internet of Things and Robotics (COM(2020) 64 final), dated 19.02.2020, available at https://ec.europa.eu/info/publications/commission-report-safety-and-liability-implications-ai-internet-things-and-robotics-0_en

[5] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Building Trust in Human-Centric Artificial Intelligence (COM(2019) 168 final), dated 08.04.2019, available at https://ec.europa.eu/transparency/regdoc/rep/1/2019/EN/COM-2019-168-F1-EN-MAIN-PART-1.PDF

[6] In view of this, references to safety of products contained in this article would normally imply that same considerations apply also to the services supplied in Bulgaria.

[7] The definition in Art. 2(a) of the General Product Safety Directive states that: “product” shall mean any product – including in the context of providing a service – which is intended for consumers or likely, under reasonably foreseeable conditions, to be used by consumers even if not intended for them, and is supplied or made available, whether for consideration or not, in the course of a commercial activity, and whether new, used or reconditioned.

[8] Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act).

[9] For example, in the Measuring Instruments Directive (2014/32/EU), the Radio Equipment Directive (2014/53/EU), and the Medical Devices Regulation ((EU) 2017/745).

[10] See the Machinery Directive (2006/42/EC).

 

Picture copyright: pixabay.com