ICO consults on explaining Artificial Intelligence guidance
The Information Commissioner’s Office (ICO) has launched a consultation on draft guidance on explaining decisions made by Artificial Intelligence (AI). The draft guidance urges organisations to act in a transparent and accountable manner to encourage public trust in the use of the technology and avoid regulatory action.
Produced in partnership with the Alan Turing Institute, and intended to provide clarity in an inherently opaque area of technology, the regulator's draft guidance aims to help organisations explain to the individuals affected, in simple terms, how AI-related decisions are made.
The guidance is built around the principles of transparency and accountability contained in the General Data Protection Regulation (GDPR). The ICO is seeking to encourage organisations to proactively raise awareness of their use of AI, to explain how solely automated decisions are made, to document the provision of those explanations and to assign responsibility for them within the organisation.
The ICO will also urge organisations to consider the context in which they operate and the potential ethical impact on individuals when ensuring that decisions made using AI are explainable.
The guidance will apply directly where the processing of personal data in an organisation's AI models, whether the system was purchased from a third party or built in-house, results in automated decision-making about individuals, i.e. decisions made without any human involvement. However, the general principles under the GDPR will still apply even where there is meaningful human involvement in AI-assisted decisions which use personal data.
- The current legal position for data protection and AI
Data protection law applies whenever an organisation processes personal data. It is technology neutral and so does not refer directly to AI or associated technologies. However, the GDPR does contain provisions relating to large-scale automated processing of personal data (profiling and automated decision-making) which are relevant to AI. Articles 13 and 22 give individuals the right to be informed of the existence of automated decision-making and the right not to be subject to a decision based solely on automated processing.
To date, there has been limited guidance for controllers on the GDPR's obligations to provide 'meaningful information about the logic involved' in automated decision-making and to offer the 'right to obtain human intervention', respectively. The guidance will therefore be welcomed by early adopters of AI technology.
- The guidance
The draft guidance consists of three parts. Part I provides an overview of the issues and identifies six main types of explanations:
- Rationale explanation – the reasons a decision was made, expressed simply.
- Responsibility explanation – who is part of the AI development team and who to contact to request human intervention.
- Data explanation – what personal data has been used in the training, testing and subsequent deployment of the AI model.
- Fairness explanation – the safeguarding activities taken to ensure decisions are fair and unbiased.
- Safety and performance explanation – steps taken to ensure the accuracy, reliability and security of decisions.
- Impact explanation – impact of AI use and associated decisions on individuals and wider society.
Part II is most relevant for technical teams and sets out a systematic approach to providing explanations of AI decisions, including selecting the correct explanation type, building a rationale for the explanation and presenting the explanation to the individual. Part III addresses governance issues, including the need to have organisational policies and procedures in place.
Whilst the guidance emphasises that there is no 'one-size-fits-all' approach to the provision of explanations, the ICO's current proposal is highly prescriptive. Feedback from the sector in response to the consultation is likely to centre on concerns about the viability, and the financial and resource cost to organisations, of de-mystifying this fundamentally complex technology.
The deadline for submitting feedback to the artificial intelligence consultation is 24 January 2020.
If you need legal advice on artificial intelligence and how it works with data protection, contact our team of experts.
This article has been co-written by Isobel Williams and Jon Belcher.