Generative AI, which can create and analyze images, text, audio, video and more, is increasingly making its way into healthcare, driven by both big tech companies and startups.
Google Cloud, Google's cloud products and services division, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon's AWS division says it is working with unnamed customers on a way to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping build a generative AI system for Providence, the nonprofit healthcare network, to automatically classify messages sent by patients to healthcare providers.
Notable generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI app for doctors; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analytics tools for medical documentation.
Broad enthusiasm for generative AI is reflected in investments in generative AI efforts targeting healthcare. Collectively, generative AI healthcare startups have raised tens of millions of dollars in venture capital to date, and the vast majority of healthcare investors say generative AI has significantly influenced their investment strategies.
But professionals and patients alike have mixed opinions on whether healthcare-focused generative AI is ready for prime time.
Generative AI might not be what people want
In a recent Deloitte survey, only about half (53%) of U.S. consumers said they thought generative AI could improve healthcare, for example by making it more accessible or shortening wait times for appointments. Fewer than half said they expected generative AI to make healthcare more affordable.
Andrew Borkowski, chief artificial intelligence officer at the VA Sunshine Healthcare Network, the largest health system in the U.S. Department of Veterans Affairs, doesn't think the cynicism is unwarranted. Borkowski warned that the deployment of generative AI could be premature given its “significant” limitations and the concerns around its effectiveness.
“One of the key problems with generative AI is its inability to handle complex medical queries or emergencies,” he told TechCrunch. “Its finite knowledge base – that is, the absence of up-to-date clinical information – and lack of human experience make it inadequate for providing comprehensive medical advice or treatment recommendations.”
Several studies lend credence to those points.
In an article published in the journal JAMA Pediatrics, OpenAI's generative AI chatbot ChatGPT, which some healthcare organizations have piloted for limited use cases, was found to make errors diagnosing pediatric diseases 83% of the time. And in a separate test using OpenAI's GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked an incorrect diagnosis as its top answer nearly two times out of three.
Today's generative AI also struggles with the medical administrative tasks that are an integral part of doctors' daily workflows. On the MedAlign benchmark, which assesses how well generative AI can do things like summarize patients' medical records and search through notes, GPT-4 failed in 35% of cases.
OpenAI and many other generative AI providers warn against relying on their models for medical advice. But Borkowski and others say they could do more. “Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments or even life-threatening situations,” Borkowski said.
Jan Egger, who heads AI-guided therapies at the Institute for AI in Medicine at the University of Duisburg-Essen, which studies applications of the emerging technology for patient care, shares Borkowski's concerns. He believes the only safe way to use generative AI in healthcare today is under the watchful eye of a doctor.
“The results can be completely wrong, and it is becoming increasingly difficult to maintain awareness of this,” Egger said. “Of course, generative AI can be used, for example, to pre-write discharge letters. But doctors have a responsibility to check it and make the final decision.”
Generative AI can perpetuate stereotypes
One particularly harmful way that generative AI in healthcare can get things wrong is by perpetuating stereotypes.
In a 2023 study out of Stanford Medicine, a team of researchers tested ChatGPT and other generative AI-powered chatbots on questions about kidney function, lung capacity and skin thickness. The co-authors found that ChatGPT's responses were not only frequently incorrect, but also reinforced long-held false beliefs that there are biological differences between Black and white people, falsehoods that are known to have led medical providers to misdiagnose health problems.
The irony is that the patients most likely to be discriminated against by generative AI for healthcare are also those most likely to use it.
People who lack health coverage (people of color, by and large, according to a KFF study) are more willing to try generative AI for things like finding a doctor or mental health support, the Deloitte survey showed. If the AI's recommendations are marred by bias, they could exacerbate inequalities in treatment.
However, some experts argue that generative AI is improving in this regard.
In a Microsoft study published in late 2023, researchers said they achieved 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 could not reach this score. But, the researchers say, through prompt engineering (designing prompts to steer GPT-4 toward particular outputs) they were able to boost the model's score by up to 16.2 percentage points. (It's worth noting that Microsoft is a major investor in OpenAI.)
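For a sense of what prompt engineering looks like in practice, here is a minimal sketch using OpenAI's Python client. The model name, prompt wording and clinical vignette are illustrative assumptions; the Microsoft study's actual pipeline was considerably more elaborate than this.

```python
# A minimal sketch of the idea behind prompt engineering: the same
# question is asked twice, once plainly and once with a prompt that
# asks for step-by-step reasoning before a final answer. The vignette
# and system prompt are made up for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "A 6-year-old presents with fever, barking cough and inspiratory "
    "stridor. What is the most likely diagnosis? (A) Epiglottitis "
    "(B) Croup (C) Asthma (D) Foreign body aspiration"
)

# Plain prompt: the model answers directly.
plain = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": QUESTION}],
)

# Engineered prompt: elicit reasoning before the answer, one of the
# simplest techniques known to lift accuracy on multiple-choice
# benchmarks.
engineered = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a careful clinician. Reason step by step "
                "through the differential, then give a final answer "
                "in the form 'Answer: <letter>'."
            ),
        },
        {"role": "user", "content": QUESTION},
    ],
)

print(plain.choices[0].message.content)
print(engineered.choices[0].message.content)
```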
Beyond chatbots
But asking a chatbot a question is not the only thing generative AI is good for. Some researchers say medical imaging could benefit greatly from the power of generative AI.
In July, a group of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC) in a study published in Nature. The system is designed to figure out when medical imaging specialists should rely on AI for a diagnosis versus traditional techniques. According to the co-authors, CoDoC performed better than specialists while reducing clinical workflows by 66%.
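The deferral idea behind a system like this can be illustrated with a toy sketch: route a case to the AI only when an estimate of its reliability clears a threshold, and otherwise hand it to a clinician. This is a simplified, assumed illustration of confidence-based deferral, not CoDoC's actual method, which learns from data when the model is more reliable than the specialist.

```python
# Toy illustration of confidence-based deferral. The threshold,
# field names and scores are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_score: float        # model's predicted probability of disease
    ai_confidence: float   # estimated reliability of that prediction

def triage(case: Case, confidence_threshold: float = 0.85) -> str:
    """Decide who makes the call for this case."""
    if case.ai_confidence >= confidence_threshold:
        return f"{case.case_id}: accept AI reading (score={case.ai_score:.2f})"
    return f"{case.case_id}: defer to clinician for review"

for case in [Case("scan-001", 0.91, 0.95), Case("scan-002", 0.55, 0.40)]:
    print(triage(case))
```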
In November, a Chinese research team demoed Panda, an AI model used to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often detected too late for surgical intervention.
In fact, Arun Thirunavukarasu, a clinical researcher at the University of Oxford, said there is “nothing unique” about generative AI that prevents its deployment in healthcare settings.
“More mundane applications of generative AI technology are feasible in the short and medium term, and will include text correction, automatic documentation of notes and letters, and improved search functions to optimize electronic patient records,” he said. “There is no reason why generative AI technology, if effective, cannot be deployed in these sorts of roles immediately.”
“Rigorous science”
But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance hurdles that must be overcome before generative AI can be useful (and trusted) as an all-around healthcare assistive tool.
“There are significant privacy and security concerns surrounding the use of generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose serious risks to patient confidentiality and trust in the healthcare system. Furthermore, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, and questions regarding liability, data protection and the practice of medicine by non-human entities still need to be resolved.”
Even Thirunavukarasu, as optimistic as he is about generative AI in healthcare, says there needs to be “rigorous science” behind patient-facing tools.
“Particularly without direct medical supervision, there should be pragmatic randomized controlled trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. “Adequate governance going forward is essential to detect any unforeseen harms following deployment at scale.”
Recently, the World Health Organization released guidelines advocating for this type of science and human oversight of generative AI in healthcare, as well as the introduction of auditing, transparency and impact assessments of this AI by independent third parties. The goal, as the WHO lays out in its guidelines, would be to encourage participation from a diverse cohort of people in the development of generative AI for healthcare and an opportunity to voice concerns and provide input throughout the process.
“Until the concerns are adequately addressed and appropriate safeguards are put in place,” Borkowski said, “the widespread implementation of medical generative AI may be … potentially harmful to patients and the healthcare industry as a whole.”