the health strategist
platform
the most comprehensive knowledge portal
for continuous health transformation
and digital health - for all
Joaquim Cardoso MSc.
Chief Research and Strategy Officer (CRSO),
Chief Editor and Senior Advisor
December 21, 2023
What is the message?
This article delves into the challenges surrounding the regulation of artificial intelligence (AI) in the medical field and the responsibility associated with harmful errors.
It emphasizes the growing concern about chatbots and generative AI models applied to healthcare, as well as the lack of clarity over who bears legal responsibility when adverse events occur.
Executive summary
Key points
- Healthcare Chatbots: Companies like Doximity are rolling out medical chatbots, such as DocsGPT, to assist doctors in various tasks. However, ethical concerns arise when these chatbots perpetuate race-based medical inaccuracies, putting Black patients at risk.
- Regulatory Challenges: Because chatbots are not classified as medical devices, rigorous regulations cannot be enforced on them. This raises questions about who is responsible in cases of harmful errors and underscores the need for more comprehensive regulation.
- AI Models in Medical Practice: The rapid implementation of large language models in electronic health records, as seen in the partnership between Epic Systems and Microsoft, highlights the urgency of assessing the effectiveness of these models and the associated risks.
- Transparency and Testing Challenges: The lack of transparency in disclosing training data and the complexity of medical AI models raise concerns about their effectiveness and safety. Studies indicate that algorithms can influence medical diagnoses harmfully.
Strategies
- Comprehensive Regulation: There is a pressing need for more comprehensive regulations for chatbots and AI models applied to healthcare to ensure patient safety and diagnostic accuracy. Change control structures should be established to handle continuous updates of these algorithms.
- Transparency and Ongoing Evaluation: Transparency in AI model training practices and continuous assessments in clinical settings are essential strategies to ensure the reliability and effectiveness of these technologies.
Statistics
- Model Complexity: The complexity of medical AI models, as evidenced by over 500 FDA-approved models, highlights the need for more detailed approaches in risk and benefit assessments.
- Impact on Diagnostic Accuracy: Studies reveal that excessive reliance on AI models can impair the diagnostic accuracy of experienced medical professionals, emphasizing the importance of robust evaluations.
Conclusion
The lack of effective regulation for chatbots and AI models applied to healthcare raises critical questions about responsibility in cases of harmful errors.
Establishing comprehensive regulations, ensuring transparency in training practices, and conducting ongoing evaluations are fundamental to promoting patient safety and confidence in the application of AI to medical practice.
The article underscores the importance of considering the constant evolution of these technologies and the challenges associated with their use in clinical environments.
Source: https://proto.life/2023/11/the-urgent-problem-of-regulating-ai-in-medicine/