APPLYING AI/ML METHODS IN MODEL RISK MANAGEMENT

The uptake of artificial intelligence (AI) is increasing exponentially. However, there are growing concerns that proper governance is often not in place to control profit-driven conduct informed by AI and machine learning (ML). These valid concerns revolve around the question: ‘How can AI destroy my business?’

ChatGPT has taken the AI conversation from the data science community into the mainstream. Used properly, it will provide companies with a competitive advantage; indeed, companies not using it may be at a disadvantage. But while there are many examples of firms using AI insights to drive profits, there is a dark side to AI use. There are concerns that students may use it to write papers, hackers may use it to identify vulnerabilities, and bad actors may use it to steal intellectual property (IP). Imagine if a similar model were trained on your browsing history, online purchases, location services and phone usage. Instead of using this information to suggest movies, a ChatGPT-like model could pull it all together in a way that would destroy your personal privacy forever.

Another major risk associated with the adoption of AI is the potential for information fabrication. As AI systems become more sophisticated, it will become increasingly difficult to distinguish real information from fake. This poses a significant threat to businesses, particularly in the banking sector, where accurate information is critical for making credit decisions. For example, if a credit decision is based on false information generated by an AI system, it could result in significant financial losses for the bank.

Apr-Jun 2023 Issue

SAS