In its fourth year, the Healthcare NLP Summit brings together thousands of the brightest minds in healthcare and artificial intelligence (AI) to discuss successes, failures, and lessons learned in the field. Given the rise of generative AI, there’s been no shortage of conversation topics, and the event will cover many of them. From using large language models (LLMs) for clinical decision support, patient journey trajectories, and efficient medical documentation, to enabling physicians to build conversational AI agents and scaling generative AI for the enterprise, this is not an event you want to miss.
Central to John Snow Labs’ mission of using AI for good, the event is free, taking place virtually from April 3-4, to ensure it’s accessible to all who wish to join. With speakers from Google, Microsoft Health Futures, Amazon, Stanford University, Novartis Pharma AG, and the World Health Organization (WHO), here are some of the trends that have emerged from this year’s program that will no doubt impact healthcare AI throughout the year.
Patient Journey Trajectories
Novartis Pharma AG will kick off one of the keynote sessions, taking us on an AI-driven patient journey. While many traditional models consider only a patient’s diagnosis and age, Novartis has expanded that to include several multimodal records, such as demographics, clinical characteristics, vital signs, smoking status, past procedures, medications, and laboratory tests. By unifying these features, a far more comprehensive view of the patient is created, and thus, a more comprehensive treatment plan.
This additional data can significantly improve model performance for various downstream tasks, like disease progression prediction and subtyping in different diseases. Given the additional features and interpretability, LLMs can then help physicians make informed decisions about disease trajectories, diagnoses, and risk factors of various diseases.
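To make the idea concrete, here is a minimal sketch of unifying multimodal records into one fixed-length feature vector that a downstream model could consume. The record schema and field names are illustrative assumptions, not Novartis’s actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical record schema -- field names are illustrative only.
@dataclass
class PatientRecord:
    age: int
    sex: str                                     # demographics
    smoking_status: str                          # "never" / "former" / "current"
    vitals: dict = field(default_factory=dict)   # e.g. {"sbp": 132, "hr": 78}
    labs: dict = field(default_factory=dict)     # e.g. {"hba1c": 7.2}
    medications: list = field(default_factory=list)

SMOKING = {"never": 0.0, "former": 0.5, "current": 1.0}

def to_features(p: PatientRecord, vital_keys, lab_keys, med_vocab):
    """Flatten one multimodal record into a fixed-length numeric vector."""
    vec = [float(p.age), 1.0 if p.sex == "F" else 0.0, SMOKING[p.smoking_status]]
    vec += [float(p.vitals.get(k, 0.0)) for k in vital_keys]   # vital signs
    vec += [float(p.labs.get(k, 0.0)) for k in lab_keys]       # laboratory tests
    vec += [1.0 if m in p.medications else 0.0 for m in med_vocab]  # multi-hot meds
    return vec

patient = PatientRecord(age=58, sex="F", smoking_status="former",
                        vitals={"sbp": 132, "hr": 78}, labs={"hba1c": 7.2},
                        medications=["metformin"])
features = to_features(patient, ["sbp", "hr"], ["hba1c"], ["metformin", "statin"])
print(features)  # [58.0, 1.0, 0.5, 132.0, 78.0, 7.2, 1.0, 0.0]
```

A real system would feed vectors like this into a sequence or transformer model trained on longitudinal patient histories; the point here is only that heterogeneous record types can be unified into one representation.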
Medical Chatbots
Another session from John Snow Labs will explore answering patient-level questions from raw clinical data. Combining structured—electronic health records, prescriptions—and unstructured data—clinical notes, medical images, PDFs—to create a complete view of a patient is critical. This data can then be used to provide a user-friendly interface, such as a chatbot to gather information about a patient or identify a cohort of patients who can be candidates for a clinical trial, population health, or research efforts.
In order to get the most out of a chatbot and meet regulatory requirements, healthcare users must find solutions that enable them to shift noisy clinical data to a natural language interface that can answer questions automatically, at scale, and with full privacy. Since this cannot be achieved by simply applying an off-the-shelf LLM or retrieval-augmented generation (RAG) solution, it starts with a healthcare-specific data pre-processing pipeline. Other high-compliance industries like law and finance can benefit from this approach, too.
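As a rough illustration of what such a pre-processing step might look like before any LLM or RAG stage, the sketch below redacts obvious identifiers and merges a structured EHR row with an unstructured clinical note into a single retrievable "patient document." The regex patterns and document format are illustrative assumptions; real de-identification requires a clinical NLP model, not regular expressions:

```python
import re

# Illustrative PHI patterns only -- production de-identification
# needs a trained clinical NLP model, not regexes.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # SSN-like numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),     # dates
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.I), "[MRN]"),   # medical record numbers
]

def redact(text: str) -> str:
    """Replace identifier-like spans with placeholder tokens."""
    for pattern, token in PHI_PATTERNS:
        text = pattern.sub(token, text)
    return text

def build_patient_document(ehr_row: dict, clinical_note: str) -> str:
    """Merge structured and unstructured data into one retrievable document."""
    structured = "; ".join(f"{k}: {v}" for k, v in sorted(ehr_row.items()))
    return f"STRUCTURED: {structured}\nNOTE: {redact(clinical_note)}"

doc = build_patient_document(
    {"diagnosis": "T2DM", "age": 58},
    "Seen on 04/03/2024, MRN: 12345. Reports improved glucose control.",
)
print(doc)
```

Documents produced this way could then be indexed for retrieval, so the chatbot answers questions against de-identified, unified patient records rather than raw clinical data.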
No-Code Generative AI
AI is only as useful as the data scientists and IT professionals behind enterprise-grade use cases—until now. No-code solutions are emerging, specifically designed for the most common healthcare use cases. The most notable is using LLMs to bootstrap task-specific models. Essentially, this enables domain experts to start with a set of prompts and provide feedback to improve accuracy beyond what prompt engineering can deliver. The LLMs can then train small, fine-tuned models for that specific task.
This capability gets AI into the hands of domain experts, resulting in higher-accuracy models than what LLMs can deliver on their own, and can be run cheaply at scale. This is particularly useful for high-compliance enterprises, given that no data sharing is required and zero-shot prompts and LLMs can be deployed behind an organization’s firewall. A full range of security controls, including role-based access, data versioning, and full audit trails, can be built in, making it simple for even novice AI users to track changes and updates and continue improving models.
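The bootstrapping loop described above can be sketched in a few lines. Here the LLM is stubbed with a keyword rule standing in for a zero-shot prompt (everything in this example, including the `llm_label` function and the adverse-drug-event task, is a hypothetical illustration); its pseudo-labels then train a small local "student" model:

```python
def llm_label(text: str) -> str:
    # Stub for a zero-shot LLM prompt such as:
    # "Does this note mention an adverse drug event? Answer ade/no_ade."
    return "ade" if "rash" in text or "nausea" in text else "no_ade"

def train_keyword_model(texts, labels):
    """Tiny 'student' model: count which words co-occur with each label."""
    word_scores = {}
    for text, label in zip(texts, labels):
        for word in text.lower().split():
            counts = word_scores.setdefault(word, {"ade": 0, "no_ade": 0})
            counts[label] += 1
    def predict(text):
        votes = {"ade": 0, "no_ade": 0}
        for word in text.lower().split():
            for lbl, c in word_scores.get(word, {}).items():
                votes[lbl] += c
        return max(votes, key=votes.get)
    return predict

notes = ["patient developed rash after dose",
         "routine follow-up, no complaints",
         "nausea reported after new medication"]
pseudo_labels = [llm_label(n) for n in notes]       # LLM bootstraps training data
model = train_keyword_model(notes, pseudo_labels)   # small model trained locally
print(model("new rash after medication"))           # prints "ade"
```

In practice the student would be a fine-tuned compact transformer rather than word counts, but the shape is the same: the large model labels, the domain expert corrects, and a cheap specialized model is trained entirely behind the firewall.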
Addressing Challenges and Ethical Considerations
Ensuring the reliability and explainability of AI-generated outputs is crucial to maintaining patient safety and trust in the healthcare system. Moreover, addressing inherent biases is essential for equitable access to AI-driven healthcare solutions for all patient populations. Collaborative efforts between clinicians, data scientists, ethicists, and regulatory bodies are necessary to establish guidelines for the responsible deployment of AI in healthcare.
This is precisely why The Coalition for Health AI (CHAI) was established—and why members will give a keynote presentation at the show. CHAI is a non-profit organization tasked with developing concrete guidelines and criteria for responsibly developing and deploying AI applications in healthcare. Working with the US government and healthcare community, CHAI creates a safe environment to deploy generative AI applications in healthcare, covering specific risks and best practices to consider when building products and systems that are fair, equitable, and unbiased.
Healthcare is at the forefront of AI, as seen in exciting new advances and use cases. The integration of generative AI, especially, has been done carefully, addressing technical challenges, ethical considerations, and regulatory frameworks along the way. It’s events like the Healthcare NLP Summit that bring these best practices and lessons learned to light. Registration is still open, and we hope to see you there!
David Talby
David Talby is CTO for John Snow Labs.