Medical AI could be ‘dangerous’ for poorer nations, WHO warns


health strategy journal (HSJ)

journal & research institute
for transforming and digitizing health care 

Joaquim Cardoso, MSc.

Servant Leader,
Chief Research & Strategy Officer (CRSO),
Editor-in-Chief and Senior Advisor


January 20, 2024


What is the message?


The World Health Organization (WHO) has issued a warning about the potential dangers of introducing health-care technologies based on artificial intelligence (AI), particularly for people in lower-income countries. 


The organization emphasizes the need for guidelines and regulations to ensure that the development and deployment of AI technologies in health care do not perpetuate inequities and biases, especially for populations in under-resourced areas.


Key points from the article include:


Concerns for Lower-Income Countries: 


The WHO cautions that the adoption of AI technologies in health care could be risky for people in lower-income countries. If AI models are trained primarily on data from wealthier nations, the resulting algorithms may serve the health-care needs of under-resourced populations poorly.


Guidelines on Large Multi-Modal Models (LMMs): 


The WHO has issued guidelines, particularly addressing large multi-modal models (LMMs), which are powerful AI models capable of processing and producing text, videos, and images. The guidelines aim to ensure that the rapid growth of LMMs in health care benefits public health without compromising safety and efficacy.


Global Cooperation and Regulation: 


The WHO emphasizes the importance of global cooperation and regulation in overseeing the development and use of AI technologies in health care. It warns against leaving the operation of these powerful tools solely in the hands of tech companies, and calls for governments, civil-society groups, and people receiving health care to be involved in oversight and regulation.


Risk of “Model Collapse” and “Race to the Bottom”: 


The article highlights the risk of a “race to the bottom”, in which companies rush to be first to release applications even if those applications are ineffective or unsafe. It also warns of “model collapse”, a disinformation cycle in which LMMs trained on inaccurate or false information pollute public sources of information.


Industrial Capture of LMM Development: 


The WHO expresses concerns about the “industrial capture” of LMM development, where large companies dominate AI research, potentially crowding out universities and governments. The guidelines recommend post-release audits of LMMs and suggest ethical training for developers working on these technologies.



In summary, the WHO is urging careful and responsible development of AI technologies in health care, both to avoid exacerbating existing inequalities and to ensure that the global population, especially in lower-income countries, benefits from these advances.




Nature
David Adam
January 2024


The introduction of health-care technologies based on artificial intelligence (AI) could be “dangerous” for people in lower-income countries, the World Health Organization (WHO) has warned.


The organization, which today issued a report describing new guidelines on large multi-modal models (LMMs), says it is essential that uses of the developing technology are not shaped only by technology companies and those in wealthy countries. If models aren’t trained on data from people in under-resourced places, those populations might be poorly served by the algorithms, the agency says.


“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” Alain Labrique, the WHO’s director for digital health and innovation, said at a media briefing today.


Overtaken by events


The WHO issued its first guidelines on AI in health care in 2021. But the organization was prompted to update them less than three years later by the rise in the power and availability of LMMs. Also called generative AI, these models, including the one that powers the popular ChatGPT chatbot, process and produce text, videos and images.


LMMs have been “adopted faster than any consumer application in history”, the WHO says. Health care is a popular target. Models can produce clinical notes, fill in forms and help doctors to diagnose and treat patients. Several companies and health-care providers are developing specific AI tools.


The WHO says its guidelines, issued as advice to member states, are intended to ensure that the explosive growth of LMMs promotes and protects public health, rather than undermining it. In the worst-case scenario, the organization warns of a global “race to the bottom”, in which companies seek to be the first to release applications, even if they don’t work and are unsafe. It even raises the prospect of “model collapse”, a disinformation cycle in which LMMs trained on inaccurate or false information pollute public sources of information, such as the Internet.
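

The mechanism behind “model collapse” can be made concrete with a toy simulation. In the sketch below, an editorial illustration rather than code from the WHO report or the Nature article, the “model” is nothing more than a table of token frequencies: each generation it is trained on the previous generation’s output, then produces the next training set by sampling from itself. Tokens that happen not to be sampled vanish from the loop permanently, so the diversity of the data can only shrink.

    # Toy simulation of the "model collapse" feedback loop (illustrative only).
    # A "model" here is just an empirical token-frequency table; each generation
    # is fitted to the previous generation's output and then generates the next
    # training corpus by sampling from itself.
    import random
    from collections import Counter

    random.seed(42)

    # Generation 0: "human" data, a long-tailed (Zipf-like) mix of 100 tokens.
    tokens = list(range(100))
    weights = [1.0 / (rank + 1) for rank in tokens]
    corpus = random.choices(tokens, weights=weights, k=500)

    for generation in range(1, 11):
        model = Counter(corpus)                # "training": count token frequencies
        vocab = list(model)
        freqs = [model[t] for t in vocab]
        # "generation": the model's own output becomes the next training set
        corpus = random.choices(vocab, weights=freqs, k=500)
        print(f"generation {generation:2d}: distinct tokens = {len(set(corpus))}")

Running the sketch shows the number of distinct tokens falling generation after generation, because a rare token that misses one sampling round can never return. Real LMMs are vastly more complex, but this progressive loss of information when models are trained on model output is the feedback dynamic the warning describes.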


“Generative AI technologies have the potential to improve health care, but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” said Jeremy Farrar, the WHO’s chief scientist.


Operation of these powerful tools must not be left to tech companies alone, the agency warns. “Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies,” said Labrique. And civil-society groups and people receiving health care must contribute to all stages of LMM development and deployment, including their oversight and regulation.


Crowding out academia



In its report, the WHO warns of the potential for “industrial capture” of LMM development, given the high cost of training, deploying and maintaining these programs. There is already compelling evidence that the largest companies are crowding out both universities and governments in AI research, the report says, with “unprecedented” numbers of doctoral students and faculty leaving academia for industry.


The guidelines recommend that independent third parties perform and publish mandatory post-release audits of LMMs that are deployed on a large scale. Such audits should assess how well a tool protects both data and human rights, the WHO adds.


It also suggests that software developers and programmers who work on LMMs that could be used in health care or scientific research should receive the same kinds of ethics training as medics. And it says governments could require developers to register early algorithms, to encourage the publication of negative results and prevent publication bias and hype.


doi: https://doi.org/10.1038/d41586-024-00161-1


Originally published at https://www.nature.com on January 18, 2024.
