AI REGULATION
R&C: What are the main risks – real or perceived – associated with the use of artificial intelligence (AI) by companies? Which issues are currently of greatest concern to regulators?
Kerrigan: The main risk with artificial intelligence (AI) is adopting it more slowly than your competitors. To give a more technical answer, financial firms are very experienced users of classical AI. AI excels at pattern recognition, so banks have used it to detect fraud for decades, and its abilities and limits are well understood. Generative AI (genAI) has only been in widespread use since late 2022, so we are still getting used to it. In particular, we are still working out how to make genAI give the same answer to the same question whether it is asked in the morning or in the afternoon. For obvious reasons, that worries financial regulators.
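To make the reproducibility point concrete: genAI systems typically choose each word by sampling from a probability distribution, so the same prompt can yield different answers on different runs. The toy sketch below (not any vendor's API; the token names and scores are invented for illustration) shows how a sampling "temperature" controls that variability, with temperature zero reducing to a deterministic greedy choice.

```python
# Toy illustration of why a generative model can answer the same
# question differently. Token names and logit values are made up.
import math
import random

# Hypothetical next-token scores (logits) for one fixed prompt.
LOGITS = {"approve": 2.1, "decline": 1.9, "escalate": 0.3}

def sample_next_token(logits, temperature, rng):
    """Sample one token; temperature == 0 means greedy (deterministic)."""
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature: higher temperature flattens the distribution.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    return rng.choices(list(probs), weights=probs.values())[0]

rng = random.Random()  # unseeded: mimics asking again in the afternoon
print("temperature 0.0:", [sample_next_token(LOGITS, 0.0, rng) for _ in range(5)])
print("temperature 1.0:", [sample_next_token(LOGITS, 1.0, rng) for _ in range(5)])
```

Even at temperature zero, production systems may not be perfectly repeatable, since hardware and batching effects can still perturb results.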
Nahra: Part of the difficulty in identifying risk is that risk depends on perspective – whose risk are we thinking about? For consumers, the risks involve bias, discrimination and inaccuracy. For businesses, the same kinds of risks come into play, but from different angles. Companies face compliance risks, transparency risks and accuracy risks. Many of the concerns for companies stem from ineffective or insufficient governance programmes: employees want to use AI, and feel pressure to use it, but the companies have not built in sufficient guardrails. There are also significant commercial risks when companies rely on AI for things like contracts – without good human oversight, they may make real mistakes without realising it. Regulators generally focus on consumer risks; companies are largely on their own in protecting themselves from these commercial risks.
