THIRD PARTY AI RISK MANAGEMENT

There is an intensifying clamour around artificial intelligence (AI), which is transforming the way people work and deliver services. Given the benefits it offers to employees, organisations and society, interest in AI will continue to grow. There are over 18,500 AI startups in the US alone, according to Tracxn. As the technology becomes more sophisticated, its applications will continue to multiply.

Companies may rely on third-party AI tools to carry out various tasks, such as data analysis, natural language processing, business logistics and customer interactions. Such pre-built solutions can save a company time, money and resources. When used responsibly, AI can equip companies with the principles, policies, tools and processes to benefit individuals and society while achieving transformative business impact.

Among the common motivators for organisations to adopt AI are improved efficiencies and processes, new and enhanced modelling capabilities, more informed decision making, enhanced data analytics, control environment optimisation, and automation. Maintaining competitive advantage and fear of falling behind competitors also play a part.

Risks associated with using AI

But while AI offers many benefits, such as increased efficiency, accuracy and innovation, it also comes with privacy, cyber security, ethical and compliance risks. These risks are amplified when organisations use third-party AI tools or integrate them into their own systems.

A survey by MIT Sloan Management Review and Boston Consulting Group (BCG) found that 78 percent of organisations were highly dependent on third-party AI, and that 55 percent of all AI failures originated from third-party tools. Despite this, only 45 percent of organisations have formal processes to assess the risks of third-party AI, indicating a gap between the use and the management of third-party AI tools.

Jul-Sep 2024 Issue

Richard Summerfield