If you’ve ever been discharged from the hospital with a printed set of follow-up care instructions, you probably understand how a little personalization would have gone a long way. At best, you may be feeling as generic as the instructions. At worst, you could find yourself readmitted to the hospital due to expensive, unplanned – and preventable – follow-up care.
McKinsey conducted consumer research in 2021 “to examine the practical applications of personalization in discharge planning.” They found: “Roughly half of the respondents reported having high-cost follow-up care after their acute event (and as high as 73% of respondents who receive Medicaid).” Of these respondents, 33% reported it was due to a reason they considered avoidable, “such as not getting clear post-discharge instructions or receiving inadequate post-acute care.”
Now imagine receiving a custom packet – digital or physical – of post-care instructions that is tailored specifically to your unique procedure, doctor, and medication. This would be followed by personalized messages from your care provider to see how you are feeling and what questions you may have. Those answers then determine the next messages you receive for continued care. This hyper-personalized model for care is possible today with artificial intelligence (AI), machine learning (ML), and large language models (LLMs).
Data integration and privacy
LLMs generate human-like content and assist with a range of tasks: writing and conversation, code generation, validating outputs, and retrieving and parsing information from unstructured documents. These applications make interacting with technology more natural and effortless, which in turn drives efficient outcomes and positive business impact.
Healthcare necessarily has strict data-sharing and privacy requirements. Whether the concern is data-sharing under the Health Insurance Portability and Accountability Act (HIPAA) or cross-network proprietary information, healthcare providers can be understandably hesitant to mingle data with outside sources. The good news is that LLMs can be securely integrated with an organization’s own data, meaning the data the LLM is trained on remains the organization’s alone.
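As a minimal sketch of what that secure integration might look like, the snippet below assumes a retrieval-augmented generation (RAG) pattern: the organization’s own care documents are searched locally, and only the relevant excerpts are passed to a privately hosted model. The generate_with_private_llm function is a hypothetical placeholder, not a specific vendor’s API.

```python
# Minimal sketch: retrieval-augmented generation over an organization's own
# documents. The LLM call is a placeholder for a privately hosted model; no
# patient data leaves the organization's environment.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank in-house documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )[:k]

def generate_with_private_llm(prompt: str) -> str:
    """Hypothetical call to an LLM hosted inside the organization's own
    cloud or data center (placeholder, not a real API)."""
    return f"[private-LLM response to: {prompt[:60]}...]"

def answer_patient_question(question: str, care_documents: list[str]) -> str:
    context = "\n".join(retrieve(question, care_documents))
    prompt = (
        "Using only the care instructions below, answer the patient's question.\n\n"
        f"Instructions:\n{context}\n\nQuestion: {question}"
    )
    return generate_with_private_llm(prompt)

if __name__ == "__main__":
    docs = [
        "Knee replacement discharge: ice the joint for 20 minutes, three times daily.",
        "Wound care: keep the incision dry for 48 hours after surgery.",
    ]
    print(answer_patient_question("How long should I ice my knee?", docs))
```

The point of the pattern is that the model only ever sees the organization’s own documents and the patient’s question, so the knowledge it draws on stays under the organization’s control.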
AI-driven conversational agents can comprehend and respond to intricate user queries, making them ideal tools for providing healthcare customer support and improving the customer experience in a number of ways. This includes letting patients ask questions and receive answers in any language, removing language barriers – key to ensuring patients understand their follow-up care instructions; simultaneously handling a high volume of inquiries to provide quick and accurate responses across an entire healthcare network; providing patient support anytime, in any time zone; and much more.
Data security as a priority
Healthcare business leaders can consider moving ML models or LLMs between public and private clouds and securely integrating company and patient data to enable a “market of one” strategy while ensuring proprietary data remains secure. For example, a healthcare provider might leverage cloud-based LLM solutions for general marketing outreach. A payer, however, may have different security needs, and hence a different LLM solution, for sharing personalized communications that include patient account or credit card information. In every case, guidelines are needed to understand what, where, and how to securely leverage LLMs.
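To make the “what, where, and how” concrete, one simple pattern is a routing layer that checks the sensitivity of each request before deciding which environment may handle it. The sketch below assumes hypothetical endpoint URLs and field names; it is not any particular product’s API.

```python
# Minimal sketch: route LLM requests to public or private infrastructure
# based on data sensitivity. Endpoints and field names are hypothetical.

SENSITIVE_FIELDS = {"patient_name", "account_number", "credit_card", "diagnosis"}

def classify_request(payload: dict) -> str:
    """Treat any request that touches protected fields as sensitive."""
    return "private" if SENSITIVE_FIELDS & payload.keys() else "public"

def route(payload: dict) -> str:
    """Return which (hypothetical) LLM deployment should handle the request."""
    if classify_request(payload) == "private":
        # PHI and payment data stay inside the organization's own cloud.
        return "https://llm.internal.example-health.org/v1/generate"
    # General marketing outreach can use a public cloud LLM service.
    return "https://public-llm.example-cloud.com/v1/generate"

if __name__ == "__main__":
    print(route({"campaign": "flu-shot-reminder"}))                   # public
    print(route({"patient_name": "J. Doe", "diagnosis": "post-op"}))  # private
```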
Today, LLMs can bring new insight to proprietary patient and organizational data without compromising its security and integrity. Experts in generative AI (genAI) and LLMs can help mitigate compliance and security risks and ensure the communication platform combines the best of public information with unique, proprietary patient data to push the personalization of the market-of-one strategy even further.
Patient customization
LLMs have changed how personalization works across every application almost overnight. Today, any business can quickly create content specific to a single person based on their individual experiences. In healthcare, LLMs give providers the ability to talk to a patient directly and account for their questions and concerns. As an individual’s healthcare needs change over time, LLMs can adjust hyper-personalized content to take those changes into account.
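As a rough illustration of how that patient-specific content might be assembled, the sketch below builds a prompt from a structured patient record. The field names and the generate() placeholder are assumptions for illustration, not any provider’s actual schema or API.

```python
# Minimal sketch: build a hyper-personalized follow-up prompt from a patient
# record. Field names and the generate() call are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    name: str
    procedure: str
    physician: str
    medications: list[str]
    days_since_discharge: int

def build_followup_prompt(record: PatientRecord) -> str:
    meds = ", ".join(record.medications)
    return (
        f"Write a short, friendly check-in message for {record.name}, "
        f"{record.days_since_discharge} days after a {record.procedure} "
        f"performed by {record.physician}. Mention their medications ({meds}) "
        "and ask whether they have questions about their recovery."
    )

def generate(prompt: str) -> str:
    """Placeholder for the organization's LLM call."""
    return f"[draft message based on: {prompt[:70]}...]"

if __name__ == "__main__":
    record = PatientRecord(
        name="Alex",
        procedure="total knee replacement",
        physician="Dr. Rivera",
        medications=["acetaminophen", "apixaban"],
        days_since_discharge=3,
    )
    print(generate(build_followup_prompt(record)))
```

As the patient’s record changes over time, the same template simply picks up the new values, which is what keeps the content hyper-personalized without extra manual work.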
Providing this one-to-one journey for an individual patient costs less with the integration of LLMs and enables more frequent communication with the patient. Healthcare communications can pivot to dozens or hundreds of different scenarios based on existing data, but how do we know it’s finding the right scenario at the right time?
LLMs can sort through millions of data points in seconds to summarize and provide potential guidance for an individual patient’s journey. In healthcare it’s vital that we provide not just an answer, but a good answer, particularly in follow-up care. Machine learning models follow the recommendations laid out by LLMs to see what the patient needs, then actively learn from and adapt to each choice. Rather than simply generating likely scenarios, the system understands why a recommendation fits and drives answers from the provider’s own data it has been trained on.
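One way to picture that selection-and-adaptation loop, under the assumption of a simple keyword-and-feedback scoring scheme (the scenario names and signals below are invented for illustration):

```python
# Minimal sketch: pick a follow-up scenario for a patient and adapt to
# feedback over time. Scenario names and signals are illustrative only.

from collections import defaultdict

SCENARIOS = {
    "pain_management": {"pain", "medication", "swelling"},
    "wound_care": {"incision", "redness", "drainage"},
    "mobility_checkin": {"walking", "exercise", "stiffness"},
}

# Running count of which scenarios a patient engaged with (simple feedback).
engagement = defaultdict(int)

def select_scenario(patient_signals: set[str]) -> str:
    """Score each scenario by keyword overlap plus prior engagement."""
    def score(name: str) -> int:
        return len(SCENARIOS[name] & patient_signals) + engagement[name]
    return max(SCENARIOS, key=score)

def record_engagement(scenario: str, patient_responded: bool) -> None:
    """Nudge future selections toward scenarios the patient responds to."""
    if patient_responded:
        engagement[scenario] += 1

if __name__ == "__main__":
    chosen = select_scenario({"swelling", "pain"})
    print(chosen)                    # pain_management
    record_engagement(chosen, True)  # adapt to the patient's response
```

A production system would rely on far richer models than this, but the shape is the same: choose the scenario the data supports, then let the patient’s responses steer what comes next.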
What’s next for healthcare?
There is a real risk in ignoring LLMs in healthcare. Humans are necessary points of contact in the healthcare system, but they will never be able to respond to individuals with perfectly customized recommendations as quickly as AI can. In an emergency, humans should be the first step. But for follow-up care specifically, when a patient may have a dozen questions, LLMs can easily account for what care and literature that patient has received, what their health status is, and what complications and concerns can arise after a given procedure. This could be the difference between a patient asking a few questions and being readmitted to the hospital.
Morgan Llewellyn
Morgan Llewellyn has a wealth of experience helping envision and implement data and AI solutions across government, healthcare, SaaS, IoT, retail, and manufacturing. As Chief Data and Strategy Officer for Stellar, an AI services company, he helps businesses assess their AI readiness, identify and prioritize AI opportunities, and implement solutions to securely improve end-to-end operations and financial outcomes. Morgan’s notable career achievements include a SaaS product of the year award, writing the algorithm used to de-identify Medicaid data, and a Brandon Hall AI innovation of the year award. Morgan holds a bachelor’s degree from Hope College and a doctorate from Caltech.