The Information Commissioner’s Office (ICO) is rolling out a risk assessment toolkit to help companies check whether their use of artificial intelligence (AI) systems complies with data protection law.

Available in beta, the AI and Privacy Risk Assessment Toolkit draws on previously published regulatory guidance on AI, as well as other publications from the Alan Turing Institute.

The toolkit sets out risk warnings that organizations processing personal data can use to understand the impact that processing may have on the rights of individuals. It also suggests best practices for managing and mitigating those risks and for ensuring compliance with data protection law.

According to the ICO, the toolkit is based on an audit framework developed by the regulator’s internal audit and investigation teams after industry leaders called for support in 2019.

The framework offers a clear methodology for auditing AI applications and ensuring that personal data is processed in accordance with the law. The ICO said that organizations using AI to process personal data can, by using the toolkit, gain a high level of assurance that they are complying with data protection laws.

“We are presenting this toolkit as a beta version, following on from the successful launch of the alpha version in March 2021,” said Alister Pearson, ICO’s senior policy officer for technology and innovation services. “We are grateful for the feedback we received on the alpha version. We now want to begin the next stage of the toolkit’s development.

“We will continue to engage with stakeholders in order to achieve our goal of developing a product that offers real added value for people who work in the AI field. We plan to release the final version of the toolkit in December 2021.”

The ICO has urged anyone interested in testing the toolkit in a live AI application to contact the regulator via email.