Artificial Intelligence & Data Protection
Artificial intelligence (AI) is widely recognised as one of the defining industrial phenomena of the 21st century. In today’s technology-dependent world, its potential impact on business and economic development can hardly be overstated. From its possible groundbreaking use as a diagnostic tool in medicine, through the anticipated introduction of autonomous vehicles as a widespread means of transport, all the way to full-scale deployment as a social evaluation mechanism, the sky currently seems to be the limit for AI.
However, despite the merited excitement around the technology, it is important to recognise that AI is still in its infancy and has a significant way to go before it can reach the predicted heights. While AI offers a host of new opportunities for business development and the optimization of industrial processes, it simultaneously presents considerable practical and legal challenges that need to be adequately addressed in order to safeguard the legal interests of all stakeholders.
Misconceptions regarding the legal status of AI can expose businesses to substantial risk and result in the attribution of liability or the breach of mandatory regulations, contracts, or other obligations. This could, in turn, lead to lengthy and expensive legal disputes or penalties imposed by state regulators, and create an unpredictable environment that deters investors and businesses.
Artificial Intelligence and the Data Protection Paradigm
Almost anyone would agree that one of the biggest concerns in Europe and around the world in relation to AI is how intrusive AI systems in fact are to our personal lives. Because some AI systems are targeted mainly at analysing human behaviour, their training data-sets largely consist of personal data. Consequently, the relevant data protection legislation (in this case the General Data Protection Regulation, or GDPR) must be fully complied with in respect of the data processing performed during their development and operation, to ensure the lawfulness of that processing and the protection of the interests at stake. In that sense, what are the legal implications of AI in terms of the GDPR?
I. Artificial Intelligence and the Principles of the GDPR
As with all processing operations, the use of personal data for the development of AI, or through AI as a tool, must first be assessed in view of the general principles of the GDPR. Often problematic in that regard are the principles of lawfulness, fairness and transparency (Art. 5(1)(a)), purpose limitation (Art. 5(1)(b)) and data minimization (Art. 5(1)(c)).
A. AI and the Fairness Principle
AI algorithms use specific input data-sets in order to be trained and function as intended by producing a certain output result. There are two potential problems related specifically to the chosen data-sets:
- the data-set may consist of “biased” data. Biased data is data that is likely to systematically produce a particular result and is not objectively representative of the object of study. Consequently, it may cause the AI system to produce results that are incorrect or outright discriminatory, which is in direct conflict with the fairness principle;
- the data-set may consist of low-quality or irrelevant data (including special categories of personal data). Low-quality data is inaccurate and leads to incorrect output results, while irrelevant data introduces criteria that have no bearing on the desired function (thus also contradicting the purpose limitation and data minimization principles).
To illustrate the above, in the past few years there have been reported cases in the USA where AI algorithms (e.g. in the fields of healthcare, criminal justice and employment) have produced results that are biased and discriminatory towards people of colour and women.
Consequently, in order for AI systems that process personal data to be in accordance with the fairness principle, it must be ensured that they do not utilize biased or flawed data-sets that produce incorrect or discriminatory results.
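The bias concern described above can be illustrated in code. The following is a minimal, hypothetical Python sketch of a crude fairness check — the difference in per-group favourable-outcome rates in a data-set. The records, group labels and the metric itself are illustrative assumptions, not a legal or technical standard for GDPR compliance.

```python
from collections import Counter

# Hypothetical toy "training" records as (group, outcome) pairs.
# Both the groups and the outcomes are invented for illustration.
records = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "denied"),
    ("group_b", "approved"), ("group_b", "denied"),
    ("group_b", "denied"), ("group_b", "denied"),
]

def approval_rates(records):
    """Return the share of 'approved' outcomes per group."""
    totals, approvals = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        if outcome == "approved":
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def max_rate_gap(rates):
    """A crude disparity measure: the largest gap between group rates.
    A value near 0 would indicate parity on this simple metric."""
    values = list(rates.values())
    return max(values) - min(values)

rates = approval_rates(records)
print(rates)
print(max_rate_gap(rates))
```

A large gap on such a metric does not by itself establish unlawful discrimination, but it is the kind of signal a controller could monitor when assessing whether a data-set is objectively representative.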
B. Purpose Limitation
The principle of purpose limitation requires that the reason for processing personal data must be clearly established and indicated when the data is collected. The purpose of the processing also needs to be fully explained to the data subject, so that they can make an informed decision regarding consent where applicable, and be able to effectively exercise their rights under the GDPR. Further data processing is only permissible if it is compatible with the original purpose.
Machine learning often takes place by utilizing data-sets collected for other purposes. For this process to comply with the GDPR, the controller must: 1) assess the existence of an applicable legal ground under Art. 6 (in practice, usually a legitimate interest taking precedence over the rights and freedoms of the data subjects, as documented in a balancing test or a data protection impact assessment), and 2) consider whether the purpose of training the AI algorithm is compatible with the initial purpose of processing. Compatibility as a concept is closely linked to fairness – the controller must consider any existing links between the purposes, the context in which the information was collected (in particular the relationship between the controller and the data subject), the categories of personal data processed and their sensitivity, the potential consequences for the data subject, and the safeguards that could be implemented to mitigate any negative impact.
There is an interesting exception to the purpose limitation principle which permits the re-use of processed data for, among other categories, scientific research and statistical purposes – categories closely tied in with the development of AI. Since the regulation does not define the term “scientific research” and the functions of different AI systems may vary greatly, it is not possible to ascertain in general whether AI development falls into any of those categories. Instead, a legal analysis must be performed by examining all relevant circumstances in each individual case.
C. Data Minimization
The development of artificial intelligence usually requires the processing of huge amounts of data. The principle of data minimization, on the other hand, requires that the gathered data be used in an adequate, relevant and limited manner, only to the extent necessary for achieving the purpose for which it was collected. Furthermore, this principle embodies the concept of proportionality of personal data processing: could the specified objectives be achieved in a manner that is less intrusive to the privacy of the natural persons concerned? This may cause difficulties, as the application or function of an AI system can change during the course of its development or afterwards. While it may be impossible to predict with certainty how a project or a business will evolve, the data minimization principle mandates that sufficient efforts be made to determine the exact needs of the algorithm and to select only the information that is actually relevant for those purposes.
These needs should be defined as clearly as possible beforehand in order to guarantee that the data subjects’ rights will be respected and excessive collection of data will not take place. Precisely outlined objectives for the application and use of the AI system in addition to a clearly laid out business plan are necessary to guarantee compliance.
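In engineering terms, data minimization often translates into filtering records down to the fields actually needed for the declared purpose before they ever reach a training pipeline. The sketch below is a hypothetical Python illustration — the stated purpose, the field names and the notion of which fields are "relevant" are invented assumptions for the example, not derived from the GDPR itself.

```python
# Illustrative assumption: the declared purpose is credit-limit
# estimation, for which only income and existing debt are deemed
# relevant in this hypothetical scenario.
RELEVANT_FIELDS = {"income", "existing_debt"}

def minimize(record, relevant=RELEVANT_FIELDS):
    """Keep only the fields needed for the declared processing purpose,
    discarding everything else before further processing."""
    return {k: v for k, v in record.items() if k in relevant}

raw = {
    "name": "Jane Doe",       # identifying data, not needed for the purpose
    "religion": "undisclosed",  # special-category data, must not be used here
    "income": 42000,
    "existing_debt": 1300,
}

print(minimize(raw))
```

The key design point is that the filter is defined once, in advance, from the documented purpose — mirroring the requirement that the needs of the algorithm be determined beforehand rather than collecting broadly and deciding later.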
D. Transparency and Automated Decision-Making
One of the main reasons behind the adoption of the GDPR was the goal of providing more clarity and certainty regarding the rights of data subjects in terms of data processing. This idea is deeply embedded within the regulation, nowhere more so than in the principle of transparency. It lays the bedrock upon which the specific provisions relating to the rights and obligations of controllers and processors, international transfers of data, etc. are subsequently placed. The articles of the GDPR are deliberately worded in an extensive and detailed manner in order to ensure a transparent application of the rules and the safeguarding of data subjects’ rights.
On the other hand, the artificial intelligence systems themselves have a very opaque nature, which has led them to be dubbed as “black boxes”. When an algorithm produces a certain result, it could be impossible to trace back the logical steps it took and determine the reasoning behind them. Essentially, the present technological level does not always allow us to analyse how and why the algorithm processes data in a certain way.
The opaque nature of AI seemingly comes into conflict with the principle of transparency in the context of Art. 22 of the GDPR. The provision stipulates that a data subject may not, in principle, be subjected to automated individual decision-making which produces legal effects concerning them or similarly significantly affects them. While at first glance this text may seem to have a very wide scope of application, the reality is somewhat different. Firstly, it requires that the data subject be the focus of a decision made through entirely automated means – that is, without any form of meaningful human intervention along the decision chain. Additionally, the automated decision must produce a legal effect on, or similarly significantly affect, the data subject.
For current AI systems, the aforementioned rule has limited bearing. Artificial intelligence is most widely used in a supplementary or auxiliary capacity to human activity, so the majority of cases do not come into conflict with the provision. Furthermore, there are three exceptions that permit the use of automated decision-making: 1) necessity for the conclusion or performance of a contract; 2) authorisation of such processing under applicable national or EU law; and 3) the explicit consent of the data subject. Should any of these three alternative conditions be present, automated decision-making is considered lawful under the GDPR.
In the instances where automated individual decision-making is performed by an AI algorithm, it is necessary to provide general information about the data processing to the data subject. In particular, the controller must expressly specify the existence of automated decision-making and provide meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. This does not require providing detailed technical specifications or the algorithm itself, instead, the controller must describe (as much as reasonably possible) in an understandable manner the general system functionality, the main aspects of the algorithm’s logic and the manner in which various categories of personal data are weighed towards one decision or another.
Furthermore, in the context of automated individual decision-making, controllers are obliged to “implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”, which correlates to the so-called “right to explanation” (hinted at in Recital 71 of the GDPR) regarding decisions that have already been reached. The actual text of the GDPR, however, does not require that controllers provide individual explanations of the internal logic of the algorithm (opening the “black box”). Instead, such explanations would generally need to facilitate the data subject’s understanding of the extent and scope of the automated decision-making process, and the overall reasons for the specific decision. Such a general explanation should enable the individual to exercise their rights under the GDPR, including the right to contest the decision or file a complaint, as well as allow them to adjust their behaviour for the purpose of receiving another outcome within the same decision-making process (an “if-then” hypothetical statement).
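The “if-then” hypothetical statement mentioned above resembles what the machine-learning literature calls a counterfactual explanation: telling the individual what minimal change in their circumstances would have flipped the decision, without opening the “black box”. The following Python sketch assumes a toy, fully deterministic decision rule — the rule, the threshold and the step size are invented purely for illustration.

```python
def decide(income, debt):
    """A toy fully-automated decision rule (illustrative only)."""
    return "approved" if income - debt >= 30000 else "denied"

def counterfactual(income, debt, step=1000, limit=100):
    """Search for the smallest income increase (in `step` units) that
    would flip a denial into an approval - an 'if-then' explanation
    that reveals nothing about the model's internals beyond its
    input-output behaviour."""
    if decide(income, debt) == "approved":
        return 0  # nothing to explain: the decision is already favourable
    for i in range(1, limit + 1):
        if decide(income + i * step, debt) == "approved":
            return i * step
    return None  # no flip found within the searched range

# Yields an explanation of the form:
# "If your income were N higher, the decision would be 'approved'."
print(counterfactual(27000, 1000))
```

Such an explanation lets the data subject understand the overall reason for the outcome and adjust their behaviour, which is the level of transparency the GDPR text actually demands, rather than disclosure of the algorithm itself.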
Consequently, even though the inherent opaque nature of AI is seemingly at odds with the notion of transparency demanded by the GDPR, the actual provisions do not require any specific action to be taken in terms of its application to AI systems. Despite this, the principle of transparency remains a cornerstone for the regulation and all effort should be made to design AI systems to be as easy to understand and transparent as possible, in order to ensure full legal compliance.
In conclusion, while AI is a pioneering and rapidly developing technology, it fits within the existing legal framework for data protection. The GDPR was introduced with the intention of guaranteeing lawful and fair processing of personal information in an age where data is used as a commodity and a resource. Through the introduction of several core principles, it creates a uniform system of safeguards for data subjects which can be applied regardless of the technology or mechanisms used. This conclusion extends to artificial intelligence as well, whose potential can be safely harnessed once the provisions and principles of the GDPR are sufficiently understood and adhered to.
The above material is provided for informational purposes only. It is not intended to, and does not represent legal advice and is not to be acted on as such. BOYANOV & Co.’s team would be happy to provide further information and specific legal advice and assistance subject to a specific engagement.
For further information on the subject, you may contact:
Nikolay Zisov, Partner, firstname.lastname@example.org
Deyan Terziev, Associate, email@example.com.
Note: AI is defined by the EU High-Level Expert Group on Artificial Intelligence as: “software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.” Within the current text the terms “AI” and “AI systems” are used interchangeably.