Organizations that use artificial intelligence to make decisions should be transparent and consider the effects on individuals, a U.K. regulator says in draft guidance.
The draft, unveiled Dec. 2 by the Information Commissioner’s Office and co-released by the Alan Turing Institute, the U.K.'s national data science and artificial intelligence institute, aims to help organizations explain AI decisions about individuals.
Organizations should consider the context and effects of their AI systems to show that their use won’t harm people’s well-being, according to the draft. The ICO is accepting comments on the draft until Jan. 24.
“The decisions made using AI need to be properly understood by the people they impact,” Simon McDougall, executive director for technology policy and innovation at the data protection agency, wrote in a blog post. “This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built into AI systems.”
Lawmakers and regulators are grappling with how to oversee artificial intelligence and other new technologies as some companies seek to use the systems for decision-making and efficiency.
Transparency and accountability around AI-enabled decisions, both components of the EU’s General Data Protection Regulation, are among the four principles in the U.K. guidance. Artificial intelligence that uses personal data falls within the scope of the GDPR and U.K. data protection law, according to the draft.
In the U.S., Democratic lawmakers have introduced the Algorithmic Accountability Act, which would require some companies to study and fix algorithms that result in biased or discriminatory decisions.