Algorithmovigilance, lessons from pharmacovigilance

Artificial intelligence (AI) systems are increasingly being deployed in high-risk applications, especially in healthcare. Despite significant attention to evaluating these systems before deployment, post-deployment incidents are not uncommon, and developing effective mitigation strategies remains challenging.

Drug safety has a well-established discipline for assessing, monitoring, understanding, and preventing adverse effects in real-world use: pharmacovigilance.

In this article, drawing inspiration from pharmacovigilance methods, we discuss concepts that can be adapted for monitoring AI systems in healthcare, an approach termed algorithmovigilance by Peter J. Embi in 2021.

We focus on five main principles from pharmacovigilance that can be transposed and adapted for the surveillance of AI systems in healthcare:

  1. Post-approval evaluation: The need for continuous assessment of AI systems after their approval.
  2. Case reporting: The importance of reporting incidents related to the use of AI systems.
  3. Data standardization: The role of standardized terminology in improving reporting and safety signal detection (see the sketch after this list).
  4. Causality assessment: Determining whether, and to what extent, an AI system is responsible for an incident.
  5. Adverse event dissemination: Ensuring that healthcare professionals and patients are informed of known and newly discovered AI-related incidents.
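
To make the transposition concrete, here is a minimal sketch, not taken from the article, of how a classical pharmacovigilance signal-detection measure, the Proportional Reporting Ratio (PRR), could be computed over standardized AI incident reports. The `IncidentReport` fields, system names, event terms, and counts are all hypothetical illustrations.

```python
# Minimal sketch: applying a pharmacovigilance disproportionality measure
# (the Proportional Reporting Ratio, PRR) to standardized AI incident reports.
# All field names, system names, event terms, and counts are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class IncidentReport:
    ai_system: str   # identifier of the deployed AI system
    event: str       # standardized incident term (principle 3)

def prr(reports: list[IncidentReport], system: str, event: str) -> float:
    """PRR = [a/(a+b)] / [c/(c+d)], from the 2x2 contingency table of
    (this system vs. other systems) x (this event vs. other events)."""
    a = sum(1 for r in reports if r.ai_system == system and r.event == event)
    b = sum(1 for r in reports if r.ai_system == system and r.event != event)
    c = sum(1 for r in reports if r.ai_system != system and r.event == event)
    d = sum(1 for r in reports if r.ai_system != system and r.event != event)
    if a == 0 or c == 0:
        return float("nan")  # not computable without cases on both sides
    return (a / (a + b)) / (c / (c + d))

# Hypothetical reports: a sepsis-alert model disproportionately associated
# with "missed deterioration" events compared with other monitored systems.
reports = (
    [IncidentReport("sepsis-alert-v2", "missed deterioration")] * 12
    + [IncidentReport("sepsis-alert-v2", "spurious alarm")] * 30
    + [IncidentReport("other-system", "missed deterioration")] * 5
    + [IncidentReport("other-system", "spurious alarm")] * 80
)
print(f"PRR = {prr(reports, 'sepsis-alert-v2', 'missed deterioration'):.2f}")
# -> PRR = 4.86
```

In pharmacovigilance practice, a commonly used heuristic (PRR of at least 2 with at least three cases) flags a signal worth investigating; an analogous threshold could trigger closer review of a deployed AI system. This only works if reports use standardized terminology, which is exactly the point of principle 3.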

The article argues that processes for AI oversight can be borrowed from the mature field of pharmacovigilance, while also recognizing the differences between pharmaceuticals and AI systems.

The article is a joint work with Dr. Mehdi Benchoufi, Prof. Theodoros Evgeniou, and Prof. Philippe Ravaud and was published in npj Digital Medicine on October 2, 2024.

By Alan Balendran
