May 21, 2024

Generative AI – A Technology Catalyst – Is Revolutionizing Healthcare

In November 2022, OpenAI unveiled ChatGPT, a publicly available generative artificial intelligence (AI) tool with the ability to converse with users. The world changed overnight. AI was suddenly available and accessible to organizations and individual users in a capacity never seen before, driving leaders across industries to consider the implications and uses of this revolutionizing technology.

Generative AI is an advanced form of machine learning built on large language models (LLMs), giving applications a unique ability to generate content in response to a user prompt or question. While earlier AI models leveraged machine learning to perform specific, narrow tasks, generative AI relies on algorithms that learn patterns and relationships from raw data to create novel content across various domains.

In an admittedly oversimplified generalization, generative AI uses statistical patterns learned from its training data to generate the most likely response. You can input anything from “Who is Bill Frist?” to “Plan a 3-day visit to Nashville.” For each of these – and everything in between – you will receive a logical and tailored output. It’s truly amazing.
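For readers curious about the mechanics, the “most likely response” idea can be sketched with a toy word-frequency model. This is a deliberate oversimplification for illustration only — real LLMs use neural networks trained on vast corpora, not simple word counts — and the sample training text below is invented:

```python
from collections import Counter, defaultdict

# Toy "training data" (hypothetical, for illustration).
training_text = "the patient is stable the patient is improving the doctor is here"

# Count which word follows each word in the training text.
follow_counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the training text."""
    return follow_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "patient" (seen twice, vs. "doctor" once)
```

A real generative model does something conceptually similar — repeatedly predicting a probable next token given everything so far — but with billions of learned parameters rather than a lookup table.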

Since ChatGPT’s release, multiple other generative AI tools have been made public, like Google’s Bard and OpenAI’s GPT-4, which Microsoft has integrated into its products, and all of them perform tasks that have traditionally required human intelligence. The implications and potential for these types of technology to be embedded within a diverse set of business models are enormous. Nowhere is this truer than in healthcare.

The Case of Healthcare

As a tool to supplement human thinking and capacity, both traditional and generative AI have opportunities for improving healthcare delivery through a variety of mechanisms. The examples below illustrate the ways in which recently enhanced traditional AI models as well as novel generative AI applications are driving healthcare innovation.

Enhanced Traditional AI

Enhanced Diagnostic Capabilities: AI is helping providers make diagnoses more quickly and accurately. In some applications, AI technology can spot abnormalities and detect cancers faster and with greater accuracy than humans. Indeed, it was five years ago when the FDA approved the first autonomous AI-based diagnostic medical tool. Developed by Digital Diagnostics, this enhanced form of AI detects diabetic retinopathy, a condition that can cause irreversible blindness if not caught early.

Pharmaceutical Development and Access: The biotech industry is a natural fit for enhanced AI technologies and innovation, especially in pharmaceutical development. Enhanced AI accelerates research and design and improves drug simulation and testing. Historically, the drug development process has been long and expensive, but AI technology has shown great potential to shorten the time to clinical delivery, decrease the overall cost of drug development, and allow lifesaving and life-improving pharmaceuticals to reach the patients who need them more quickly.

Generative AI

Administrative Efficiency: Generative AI is predicted to have its most immediate healthcare impact on streamlining administrative tasks and time-consuming back-office functions. Administrative spending and waste are a huge problem for the U.S. healthcare sector. A Health Affairs report released earlier this year found that 15-30 percent of total health spending is attributable to administrative costs, at least half of which was found to be ineffective or wasteful. Recent estimates suggest that the adoption of AI tools could save the U.S. healthcare industry anywhere from $200 billion to $360 billion a year. This is an enormous opportunity to cut costs and limit wasteful spending. Emerging companies like Carta Healthcare and CodaMetrix — two Frist Cressey Ventures portfolio companies — are leveraging AI capabilities in this space. For example, Carta’s “Atlas” product pulls data from medical records and populates clinical registries, allowing clinic staff to focus on other tasks while increasing data availability and accuracy. And CodaMetrix is homing in on AI-powered autonomous coding to reimagine one of the costliest components of a health system’s revenue cycle.

Patient Communication: Like administrative tasks, patient communication is a time-intensive, tedious, and vital component of running a successful healthcare system. It is also a natural fit for generative AI innovation. Work is underway to more fully integrate generative AI within electronic health records (EHRs) to assist with patient correspondence. Microsoft, for example, is working with Epic to integrate OpenAI’s GPT-4 model into Epic’s EHR platform to support patient communication. And patients are already reported to prefer ChatGPT’s answers to medical questions over those provided by physicians; a recent study found that ChatGPT’s responses in fact rated higher in both quality and empathy. As generative AI continues to revolutionize the patient communication process, we will undoubtedly see substantial improvement in patient satisfaction and in how seamlessly patients navigate and interact with the healthcare sector.

Medical and Surgical Simulation: Generative AI can also create virtual patient simulations, allowing medical students and professionals to practice procedures and treatments in a risk-free environment, enhancing their skills and decision-making abilities. Traditional medical simulations often rely on pre-programmed scenarios, which can limit the range of experiences and challenges that students encounter. With generative AI, these simulations can be adapted in real-time to respond to the actions and decisions of the students, creating a more unpredictable and authentic learning environment.

Potential Risks

The hype surrounding generative AI is only matched by the fear it has generated in many business, community, and policy leaders, and rightly so. Generative AI and its applications have an enormous potential to radically transform healthcare for the good. But there must be oversight that enhances safety and protects against short- and long-term risks without stifling innovation. Several areas specific to healthcare will drive the policy conversation surrounding AI in the near future:

  • Accuracy: Generative AI platforms are prone to a phenomenon described as “hallucination,” in which the model produces plausible-sounding but false or fabricated outputs. The potential generation of false positives and false negatives in diagnosis and treatment may lead to unnecessary or harmful care.
  • Bias: Generative AI runs the risk of amplifying biases, perpetuating health inequity, and promoting misinformation – even when appropriately used. Generative AI outputs are those that are the most statistically probable. This means that their accuracy is dependent on the quality and diversity of data sets used to inform them. Without proper oversight, advanced AI can unintentionally extend existing harms and inequities through the use of incomplete, inaccurate data and prejudiced algorithms.
  • Lack of Transparency: Generative AI applications operate as a black box producing results without explanation of process, further clouding the ability to verify the accuracy of given outputs. For example, there is concern over healthcare organizations leveraging generative AI algorithms which result in denial of benefits or prior authorization without the ability to defend the rationale of how the application arrived at a recommendation.
  • Intellectual Property and Privacy Concerns: As generative AI applications are reliant on large data sets to inform their ability to create content and generate responses, concerns have been raised regarding the ownership, confidentiality, and privacy of the underlying medical and clinical data used to train healthcare AI models. Researchers have identified risks regarding patient disclosures, opt-out ability, and data deletion as areas lacking clarity with the development and deployment of generative AI in healthcare.

Advances in generative AI have far outpaced any government or regulatory response. Moving forward, our policy makers must play an active role in mitigating the risks of AI when it comes to its capabilities and applications. And though regulation and policy implementation have been slow, Congress is beginning to assess ways to ensure that the AI revolution is deployed carefully and equitably.

In recent remarks on the floor of the United States Senate, Majority Leader Chuck Schumer (D-NY) stated:

“Of the many things yesterday’s briefing made clear, one of them was that government must play a role in making sure AI works for society’s benefit. The private sector has made stunning progress innovating on AI, and Congress needs to be careful not to curb or hinder that innovation. But we are going to need guardrails, and the only agent that can do that is government.”

Looking Ahead

AI is undoubtedly a technology catalyst – and we are just scratching the surface of what it can do. As we continue to look ahead at future innovations and adaptations of AI, we must prioritize its use as a tool to augment human intelligence, not to replace it.

Already, AI is revolutionizing the healthcare industry by limiting administrative burden and waste, enhancing patient communication and diagnostic capabilities, and augmenting the capabilities of the biotech world through drug discovery. As we continue to find ways to integrate AI capabilities within healthcare, we will find increased efficiency and cost-savings, improved productivity and treatment options, and, most importantly, better outcomes for patients.

Parallel to these remarkable innovations, we must prioritize developing guardrails that protect against the misuse of AI models, that work to make AI systems safer, and that set the stage for sound deployment of AI tools for decades to come. We must work to ensure that generative AI continues to be implemented equitably and appropriately. And to do so, sound policy that protects against potential harms while maintaining an environment ripe for innovation must lead the way.
