ARTIFICIAL INTELLIGENCE GUIDANCE ON RISK MANAGEMENT – ISO 23894 COMING SOON
A new standard for AI risk management – ISO 23894 (the AI risk standard) – is close to publication. The AI risk standard was submitted as a final draft to the members of the International Organization for Standardization (ISO) on 1 August 2022. The members have eight weeks to vote on whether the standard should be approved. Following approval, the final draft will be submitted for publication and released as an official international standard.
The AI risk standard will introduce a common framework for the implementation and use of AI systems, a topic that many countries, including the UK, are actively engaged with.
The draft AI risk standard
The draft AI risk standard incorporates the pre-existing standard ISO 31000:2018, which provides general guidance on risk management (the general risk management standard). The general risk management standard describes: (i) the underlying principles of risk management (an integrated, inclusive, structured and comprehensive approach, with continual improvement); (ii) how risk management frameworks should be integrated into the significant activities and functions of an organisation; and (iii) how risk assessment processes and practices help to identify risks and ways to manage them. It emphasises the importance of considering the context in which AI is used within an organisation.
The draft AI risk standard uses ISO 31000:2018 as its base but goes further, suggesting that organisations that develop, deploy or use AI products, systems and services need to manage risks specific to this technology. However, it is not intended to provide specific risk management guidance for products and services that use AI for particular objectives, such as safety and security.
Oct-Dec 2022 Issue
CMS Cameron McKenna Nabarro Olswang LLP