THE ISO’S AI GOVERNANCE STANDARD EXPLAINED

With the rapid growth of foundational artificial intelligence (AI) models such as ChatGPT and Bard in recent months, the spotlight on AI as a whole has intensified. In a recent Gartner, Inc. poll of more than 2,500 executive leaders, 45 percent reported that the popularity of ChatGPT had prompted them to increase investment in AI.

It is therefore important to understand how to manage the use of AI as effectively as possible by considering the most appropriate form of AI governance. In this regard, standards (and standardisation) can play an important role in the governance of AI and help to mitigate some of the risks associated with its use. Most recently, the UK government, in its White Paper ‘A Pro-Innovation Approach to AI Regulation’, emphasised the importance of technical standards.

This article will explore standard ISO 38507 on AI governance, issued in 2022 by the International Organization for Standardization (ISO). It will also consider how standards are created, explain why AI technologies are different to other technologies (and hence require an additional layer of governance) and suggest guidance based on ISO 38507 to improve AI governance within an organisation.

What are ISO standards?

A standard is a document that describes the best way of doing something. Covering a wide range of activities, a standard could concern making a product, managing a process, delivering a service or supplying materials. A standard also provides rules, guidelines or characteristics for activities or their results, aimed at achieving the optimum degree of order in a given context. Standards take many forms: apart from product standards, examples include test methods, codes of practice, guideline standards and management systems standards.

Jul-Sep 2023 Issue

CMS Cameron McKenna Nabarro Olswang LLP