BETTER PRACTICES FOR AI GOVERNANCE IN A CHANGING REGULATORY LANDSCAPE
The use of artificial intelligence (AI) models is evolving rapidly, and regulators are moving quickly to catch up. Risk management organisations should act now to ensure they are effectively controlling the risks these powerful models bring. Doing so positions a firm to capture the efficiencies and manage the risks, while staying aligned with pending and future regulations.
In 2018, OpenAI released the first generative pre-trained transformer (GPT), a type of AI language model. In the months since, we have seen GPT-3, GPT-4, ChatGPT, LLaMA, Bard, Midjourney, Stable Diffusion, and many other similar models.
Of course, financial services firms have been embracing machine learning (ML) and AI models for quite some time, and increasingly so over the past few years – with good reason. ML and AI models allow firms to use new data and derive new relationships or behaviour patterns, resulting in more accurate models. These higher-performing models bring the promise of greater revenue, lower losses, better customer relationships and a competitive advantage over peer institutions.
GPT-2, released in February 2019 with 1.5 billion parameters, is considered by many to be the first large language model (LLM). PaLM, released in early 2022 as a state-of-the-art LLM, had 540 billion parameters – around 360 times more than GPT-2. The famous ChatGPT is small but effective by comparison, with ‘only’ 20 billion parameters. These models clearly bring immense complexity, yet even far simpler AI models are often not well understood.
Jul-Sep 2023 Issue
SAS Institute