Experts Propose Specific and Suited Guidelines for the Use and Regulation of AI

Current Artificial Intelligence (AI) models for cancer treatment are trained and approved only for specific intended purposes. GMAI models, in contrast, can handle a wide range of medical data including different types of images and text. For example, for a patient with colorectal cancer, a single GMAI model could interpret endoscopy videos, pathology slides and electronic health record (EHR) data. Hence, such multi-purpose or generalist models represent a paradigm shift away from narrow AI models.

Regulatory bodies face a dilemma in adapting to these new models because current regulations are designed for applications with a defined and fixed purpose, a specific set of clinical indications, and a defined target population. Adaptation or extension after approval is not possible without repeating quality management and regulatory and administrative processes. GMAI models, with their adaptability and their ability to make predictions even without specific training examples (so-called zero-shot reasoning), therefore pose challenges for validation and reliability assessment. Currently, they fall outside all international regulatory frameworks.

The authors point out that existing regulatory frameworks are not well suited to handle GMAI models, given these characteristics. "If these regulations remain unchanged, a possible solution could be hybrid approaches: GMAIs could be approved as medical devices, and the range of allowed clinical prompts could then be restricted," says Prof. Stephen Gilbert, Professor of Medical Device Regulatory Science at TU Dresden. "But this approach forces models with the potential to intelligently address new questions and multimodal data onto narrow tracks, through rules written when these technologies were not anticipated. Deliberate decisions should be made on how to proceed with these technologies, rather than excluding their ability to address questions they were not specifically designed for. New technologies sometimes call for new regulatory paradigms," says Prof. Gilbert.

The researchers argue that it will be impossible to prevent patients and medical experts from using generic models or unapproved medical decision support systems. It is therefore crucial to maintain the central role of physicians and to empower them as informed interpreters of AI-generated information.

In conclusion, the researchers propose a flexible regulatory approach that accommodates the unique characteristics of GMAI models while ensuring patient safety and supporting physician decision-making. They point out that a rigid regulatory framework could hinder progress in AI-driven healthcare, and call for a nuanced approach that balances innovation with patient welfare.

Gilbert S, Kather JN.
Guardrails for the use of generalist AI in cancer care.
Nat Rev Cancer. 2024 Apr 16. doi: 10.1038/s41568-024-00685-8
