The chatbot is here to see you


the health strategist

institute for continuous health transformation and digital health


Joaquim Cardoso MSc
Chief Researcher & Editor of the Site
April 27, 2023


ONE PAGE SUMMARY


Generative AI technology is being explored as a way to alleviate workforce shortages and allow doctors to spend more time with patients. 


  • Accenture reports that at least half of healthcare organizations are planning some form of generative AI pilot program this year. 

  • Generative AI could potentially help patients understand doctors’ notes or listen to visits and summarize them. 

However, while limited research has shown chatbots to be decent at answering simple health questions, they have stumbled on more nuanced ones.


Steven Lin, executive director of the Stanford Healthcare AI Applied Research Team, is skeptical that generative AI will quickly become widespread in frontline healthcare. 


The Department of Health and Human Services recently proposed new rules that would require makers of AI in healthcare to open up their algorithms to scrutiny if they want HHS certification. 


Both tech companies and health systems have formed groups to address the potential trust issues.


DEEP DIVE

The doctor is AI


Politico
By BEN LEONARD, With help from Derek Robertson
04/27/2023 


This week, Digital Future Daily is focusing on the fast-moving landscape of generative AI and the conversation about how and whether to regulate it — from pop culture to China to the U.S. Congress. 

Read day one’s coverage on AI havoc in the music business, day two on AI inventors, and day three on a proposed revolution in data rights.


The chatbot is here to see you.


That’s what many health care organizations envision in the future. As workforce shortages plague the health care sector, executives and tech companies are looking for ways to boost efficiency and let providers spend more time with patients and less time in their digital systems.


Generative artificial intelligence is one potential way to do that, by allowing chatbots to answer the kinds of health care questions that often leave doctors’ inboxes overflowing.


Right now, no fewer than half of health care organizations are planning to use some kind of generative AI pilot program this year, according to a recent report by consulting firm Accenture. Some could involve patients directly. AI could make it easier for patients to understand a provider’s notes, or listen to visits and summarize them.



But what about… you know, actual doctoring? So far, in limited research, chatbots have proven decent at answering simple health questions. 


Researchers from Cleveland Clinic and Stanford recently asked ChatGPT 25 questions on heart disease prevention. Its responses were appropriate for 21 of 25, including on how to lose weight and reduce cholesterol.


But it stumbled on more nuanced questions, including in one instance “firmly recommending” cardio and weightlifting, which could be dangerous for some patients.


Steven Lin, a physician and executive director of the Stanford Healthcare AI Applied Research Team, said that the models are fairly solid at getting things like medical school test questions right. However, in the real world, questions from patients are often messy and incomplete, Lin said, unlike the structured questions on exams.


“Real patient cases do not fit into the oversimplified ‘textbook’ presentations that LLMs are trained on,” Lin told Digital Future Daily. “The best questions for LLMs like ChatGPT are those with clear, evidence-based, widely established, and rarely disputed answers that apply universally, such as ‘What are the most common side effects of medication X?’”


Lin is skeptical that generative AI will quickly become widespread in frontline health care. He pointed out that electronic health record systems were first rolled out in the 1960s, but it took until the 2010s for them to become ubiquitous.


“Soon, in the next 6–12 months, the hype will cool down and we’ll slow down and realize how much work still needs to be done,” Lin said.



Brian Anderson, chief digital health physician at MITRE, a nonprofit that helps manage federal science research, sees it moving quite a bit faster: becoming widespread in the sector within a year or two, given the business proposition of allowing providers to be more efficient.


“Algorithms… will help drive improved clinical outcomes in the near term,” Anderson said. “The comput[ing] power is there to really drive some of the value … it’s just a matter of time.”



The potential for explosive growth is exactly what worries a lot of people about generative AI: They see a highly disruptive and unaccountable new tech platform, still largely experimental, with nearly no brakes on its adoption across society.


Health care, though, is a little different. It’s a highly regulated space with a much slower pace of innovation, big liability risks when things go wrong and a lot of existing protections around individual data.



To help users determine if a model is appropriate for them, the Department of Health and Human Services earlier this month proposed new rules that would require makers of AI in health care, as well as health technology developers that incorporate other companies’ AI, to open up those algorithms to scrutiny if they want HHS certification.


That health IT certification for software like electronic health record systems is voluntary, but it is required when the technology is used in many government and private health care settings.


Micky Tripathi, HHS’ national coordinator for health IT, understands why the industry is excited about AI’s potential to help better inform care, but he also said that the anxiety surrounding generative AI in the wider economy applies to health care.


He didn’t mince words at a recent industry conference: People should have “tremendous fear” about potential safety, quality and transparency issues.



With that in mind, both tech companies and health systems are calling for steps to ensure trust in the technology, and have formed groups to address the issues.


The Coalition for Health AI, whose members include Anderson’s MITRE, Google, Microsoft, Stanford and Johns Hopkins, recently released a blueprint to facilitate trust in artificial intelligence’s use in health care. It called for any algorithms used in the treatment of disease to be testable, safe, transparent and explainable, and for software developers to take steps to mitigate bias and protect privacy.


Anderson said he’s excited about the technology’s potential but is “nervous” about future capabilities that can be difficult to anticipate. He said managing physicians’ inboxes and helping draft letters could be strong early use cases.


“I’m hopeful those are some of the first areas of exploration, rather than helping me decide what to do with this patient that’s suffering from some kind of acute sepsis,” Anderson said.



In the race for AI supremacy, it might not be computing firepower that gives researchers a leg up.


In a study published today by Georgetown University’s Center for Security and Emerging Technology, the authors set out to learn more about how “compute” (the term of art for computing power) factors into researchers’ concerns around AI development. 

The answer? Not as much as you might think.


“More respondents report talent as an important factor for project success, a higher priority with more funding, and a more limiting factor when deciding what projects to pursue,” they write in the paper’s introduction, hence its title: “The Main Resource Is The Human.” 

Remarkably, this finding mostly held true across both private industry and academia, something the CSET researchers say indicates that policymakers’ focus on semiconductor manufacturing and competition with China might be misguided.


“In light of these results… this report suggests that policymakers temper their expectations regarding the impact that restrictive policies may have on computing resources, and that policymakers instead direct their efforts at other bottlenecks such as developing, attracting, and retaining talent,” the researchers write. — Derek Robertson



The European Commission concluded its first ever citizens’ panel on the metaverse this week — which produced some notable, if vague, policy recommendations.


Patrick Grady, a policy analyst at the Center for Data Innovation, has been chronicling the three-part citizens’ panel at his blog, and had a concluding recap this morning. Near the end he makes a notable observation: that citizens’ main concerns about the metaverse appear to be around accessibility and privacy, while the Commission’s own consultation appears to be focused on competition and industrial policy.


The panel’s top five priorities, in descending order, were to: 


  • provide “training programs to teachers” on VR technology; 
  • establish clear regulation around anonymity;
  • “guarantee for all citizens free and easy access” to information about VR; 
  • research its impact on health; and 
  • establish “terms and conditions” for data access and privacy in virtual worlds. 

The European Union will soon publish its own report on the panel, ahead of the Commission’s wider guidance for the technology now expected in June.

Derek Robertson


Originally published at https://www.politico.com on April 27, 2023.


Names mentioned


  1. Steven Lin — a physician and executive director of the Stanford Healthcare AI Applied Research Team
  2. Brian Anderson — Chief Digital Health Physician at MITRE
  3. Micky Tripathi — HHS’ National Coordinator for Health IT
  4. Members of the Coalition for Health AI, including:
  • Google
  • Microsoft
  • Stanford
  • Johns Hopkins
