RESPONSIBLE USE OF AI IN THE PHARMACEUTICAL INDUSTRY

Many articles have been written about artificial intelligence (AI) lately, mainly due to the hype around ChatGPT at the beginning of this year. Most of those articles can be broadly grouped into those hailing "the biggest opportunity for mankind" and those prophesying "the end of humanity".

People can generally be placed into three camps. First, the early adopters – those in experimentation and discovery mode – eager to implement AI in their business activities. Second, the observers – those generally open to AI but without a full understanding of the technology and its risks. And third, those who are either uninterested or deeply sceptical, suspecting there is 'witchcraft' behind the technology.

For the pharmaceutical field, it is the potential use cases that pique our interest in not only traditional AI, but also the new generative AI (GenAI). The latest breakthrough in the development of large language models (LLMs), one type of GenAI, brings opportunities to improve access to health information, to serve as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings, protecting people's health and reducing inequity.

It is clear that we must ensure the ethical use and application of AI. We must also adhere to key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation.

Opportunities and risks

One of the key challenges that organisations face is the absence of a specific AI regulatory framework. The draft AI Act in the European Union (EU) has long been considered the most advanced effort of its kind, and a benchmark for internal risk and compliance work.

Jul-Sep 2023 Issue

Novartis International AG