The European Union's General Data Protection Regulation (GDPR) applies to the processing of any information that, either alone or in combination with other information, can identify an individual. This broad definition can lead to challenges in the world of artificial intelligence, as AI systems process ever-increasing amounts of personal data and automated decision-making and profiling in respect of individuals escalate.
For this reason, it is critical that all stakeholders consider data protection issues throughout AI-assisted projects.
Various data protection issues should be considered when using AI systems to process personal data. For example, all processing of personal data under the GDPR must have a lawful basis. Different lawful bases may be appropriate during different phases of an AI system’s lifecycle.
For example, if an AI system developed for a general-purpose task (such as facial recognition) is then used for a different, specific purpose (such as crime prevention), the appropriate lawful bases may differ between the development and deployment phases.
Often, the lawful basis relied upon will be legitimate interests; however, this may not always be suitable (e.g., if the intended use of data subjects' personal data would cause unnecessary harm or would not be expected). If consent is relied upon instead, care must be taken to ensure that valid consent is obtained: freely given, specific, informed and unambiguous, and involving a clear affirmative act based on genuine choice. Withdrawals of consent must also be respected.
GDPR’s Risk-Based Approach
Regarding governance and accountability, the GDPR's risk-based approach requires organizations to implement measures appropriate to their particular situation (i.e., the nature, scope, context and purposes of the proposed processing and the resulting risks to individuals' rights and freedoms).
In the case of AI, the particular risks to individuals' rights and freedoms and the circumstances of the processing mean that an appropriate balance must be struck between competing interests to ensure that data protection law is adhered to. However, a "zero-tolerance" approach to these risks is neither realistic nor required by law.
As the use of AI often involves personal data processing that is likely to result in a high risk to the rights and freedoms of data subjects, data protection impact assessments (DPIAs) will likely be required prior to processing and are another important aspect of accountability. A DPIA should assess whether the purposes of the processing could be achieved using less intrusive methods, and the proportionality of the processing should also be considered.
Ensuring Personal Data Accuracy, Privacy
Personal data accuracy is also important. The GDPR requires personal data to be accurate and, where necessary, kept up to date. Statistical accuracy, a distinct concept, is also significant in the context of AI systems.
AI systems used to make inferences about people must be statistically accurate enough for their intended purposes to ensure fairness. While not every inference has to be correct, organizations must consider the possibility that inferences will be incorrect and the impact this could have on any decisions based on them.
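Purely by way of illustration (and not as any legal standard), the kind of statistical-accuracy check described above might look something like the following sketch; the data, metric choices and the 0.95 threshold are hypothetical assumptions.

```python
# Hypothetical sketch: checking whether a model's inferences are
# "statistically accurate enough" for their intended purpose.
# All names, data and the 0.95 threshold are illustrative assumptions.

def error_rates(predictions: list[int], actuals: list[int]) -> dict[str, float]:
    """Return overall accuracy plus false-positive/false-negative rates."""
    tp = sum(1 for p, a in zip(predictions, actuals) if p == 1 and a == 1)
    tn = sum(1 for p, a in zip(predictions, actuals) if p == 0 and a == 0)
    fp = sum(1 for p, a in zip(predictions, actuals) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predictions, actuals) if p == 0 and a == 1)
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

REQUIRED_ACCURACY = 0.95  # purpose-specific bar; illustrative, not a legal standard

rates = error_rates(predictions=[1, 0, 1, 1, 0], actuals=[1, 0, 0, 1, 0])
if rates["accuracy"] < REQUIRED_ACCURACY:
    print(f"Below target for intended purpose: {rates}")  # flag for human review
```

The point of such a check is not the specific metric but that the acceptable error rate is tied to the intended purpose and the consequences of a wrong inference for the individual.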
Much has been written about the potential for bias in AI systems, which can lead to outputs that have unjustified discriminatory impacts on individuals.
The GDPR tackles issues regarding unfair discrimination in various ways, including through the fairness principle and through the stated aim of protecting individuals’ rights and freedoms in respect of the processing of their personal data. Technical approaches should also be taken to minimize the risk of discrimination in machine learning.
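As a hypothetical illustration of one such technical approach (one of several possible bias-mitigation techniques, not one prescribed by the GDPR), the sketch below measures whether favourable outcomes are distributed evenly across groups; the group labels, outcomes and 0.1 tolerance are assumptions.

```python
# Hypothetical sketch: measuring demographic parity across groups before
# deployment. Group labels, outcomes and the 0.1 tolerance are assumptions.
from collections import defaultdict

def positive_rates(outcomes: list[int], groups: list[str]) -> dict[str, float]:
    """Rate of favourable (1) outcomes for each group."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

rates = positive_rates(
    outcomes=[1, 1, 1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
gap = max(rates.values()) - min(rates.values())
if gap > 0.1:  # illustrative tolerance; acceptable gaps are context-specific
    print(f"Potential disparate impact: {rates} (gap {gap:.2f})")
```

Checks of this kind are typically run before deployment and repeated over time, since bias can emerge as the data an AI system encounters drifts from its training data.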
Ensuring that individuals’ privacy rights are respected when using AI systems is also a significant consideration, including rights in respect of information, access, erasure, rectification, data portability, objection and restriction of processing.
Although the use of personal data in AI systems may make it more difficult to comply with individuals' data protection-related rights, these rights should be considered at each stage of the development and deployment of AI systems, and data subjects' requests to exercise their rights should be addressed in accordance with the GDPR's requirements.
Notably, the GDPR gives individuals specific rights where personal data processing involves solely automated decision-making, including profiling, that has legal or similarly significant effects on them. Certain information must be provided to individuals about this processing, and individuals also have rights regarding decisions made about them (e.g., the right to obtain human intervention, to express their point of view, to challenge decisions made about them and to have the logic of the decision explained).
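A minimal sketch of how the human-intervention safeguard might be operationalized follows, assuming a hypothetical decision pipeline; the fields and routing rule are illustrative and are not drawn from the GDPR's text.

```python
# Hypothetical sketch: routing solely automated decisions with legal or
# similarly significant effects to a human reviewer (an Art. 22-style
# safeguard). The fields and routing rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    significant_effect: bool
    contested: bool = False  # set when the data subject challenges the decision

def route(decision: Decision) -> str:
    """Require human intervention for significant or contested decisions."""
    if decision.significant_effect or decision.contested:
        return "human_review"
    return "automated"

print(route(Decision("ds-001", "loan_declined", significant_effect=True)))
# -> human_review
```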
Personal data security is also significant in the context of AI systems. Under the GDPR, personal data must be processed in a way that ensures appropriate levels of security against its unauthorized or unlawful processing, accidental loss, destruction or damage. AI can exacerbate known security risks and make them more difficult to control.
Appropriate security measures depend on the nature and extent of the risks arising from the types of processing carried out; notably, using AI to process personal data can affect existing security controls and introduce new types of risk.
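For illustration, one widely used security measure is pseudonymizing direct identifiers before data enters an AI pipeline; the sketch below assumes a hypothetical keyed-hash scheme (note that pseudonymized data remains personal data under the GDPR).

```python
# Hypothetical sketch: pseudonymizing a direct identifier with a keyed hash
# before the record enters an AI pipeline. Key handling and field names are
# illustrative; pseudonymized data remains personal data under the GDPR.
import hashlib
import hmac

SECRET_KEY = b"store-in-a-key-management-system"  # never hard-code in practice

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39"}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # downstream processing sees only the token
```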
While AI systems can be of tremendous benefit, both to individuals and society more widely, organizations using AI systems must address the risks for data subjects’ privacy rights and freedoms. Data protection issues should be considered from the outset and monitored throughout the lifecycles of AI systems to ensure compliance with the GDPR.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Author Information
Rohan Massey is a partner and co-chair of Ropes & Gray’s Data, Privacy & Cybersecurity practice.
Clare Sellars is counsel in Ropes & Gray’s Data, Privacy & Cybersecurity practice.