By Dr. John Showalter, MD, MSIS, Chief Product Officer of Jvion
● Healthcare has always lagged behind the enterprise in technology adoption — AI included. But as regulations catch up to enterprise AI, clinical AI holds lessons for balancing data privacy and data sharing under regulatory scrutiny.
● In healthcare, AI has thus far mostly been adopted in silos across departments. But with new interoperability regulations, AI can leverage data lakes that consolidate data from every patient interaction, regardless of provider, while protecting patient privacy.
● As the FDA moves to regulate AI as software-as-a-medical-device, these data lakes will also be an avenue to certify that AI is continuously learning, improving and following quality assurance standards.
Healthcare has always dragged its feet when it comes to digital technology. In 2019, an Ernst & Young survey found that only 32% of physicians and 27% of consumers in the U.S. rated their healthcare system as performing well at introducing digital technologies. Meanwhile, most electronic health record (EHR) systems still evoke the early days of the internet, with outdated and tedious interfaces that get an F in usability.
But when it comes to AI, healthcare can be a model for other industries.
Recent films like The Social Dilemma reflect a growing awareness that the algorithms quietly shaping our lives behind the scenes are almost entirely unregulated. With that awareness comes a growing backlash against Big Tech. Google and Facebook have already been hit with antitrust suits. As a new administration takes office, enterprises using AI will likely face unprecedented regulatory scrutiny on data privacy, security, bias, and quality control.
Healthcare is no stranger to regulations. In fact, strict regulations are no doubt partly responsible for the slow adoption of digital technology in healthcare. But the recent success of clinical AI in the current and proposed regulatory environment models a path forward for enterprises facing their own more regulated future.
Balancing Privacy and Data Sharing
Under HIPAA, healthcare operates with some of the tightest privacy regulations of any industry. In many ways, these privacy protections have limited clinical AI’s potential, trapping data in silos and preventing AI and its users from seeing a complete picture of a patient’s medical history. This has siloed AI itself: hospital AI tools, from scanning MRI images to predicting patients’ risk of hospitalization, are adopted in isolation. These AIs are not collaborating.
However, interoperability rules finalized last year pave the way for greater data sharing while maintaining patient privacy. With these rules in place, we can securely share data across health systems and break down the silos that prevent providers from comprehensively understanding the needs of their patients.
The rules standardize APIs that would enable EHRs to connect to distributed data lakes, stored on cloud or hybrid-cloud platforms, linking digital endpoints across the care continuum. Data on every patient interaction — blood test results, bills paid, prescriptions filled, demographics updated — can be shared with strong authentication and consolidated in one place, accessible to any care team making decisions for that patient. AI can then analyze this data to help the care team make more informed decisions.
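As an illustration of the consolidation step, here is a minimal sketch of merging per-provider records into a single chronological patient view. The record fields, provider feeds, and function name are hypothetical; a real implementation would pull standardized resources over authenticated, standards-based APIs rather than in-memory lists.

```python
from datetime import datetime

def consolidate_patient_records(patient_id, *provider_feeds):
    """Merge interaction records for one patient from multiple provider
    systems into a single chronologically ordered timeline.

    Each feed is a list of dicts with (hypothetical) keys:
    'patient_id', 'timestamp' (ISO 8601 string), 'event', 'source'.
    """
    timeline = [
        rec for feed in provider_feeds for rec in feed
        if rec["patient_id"] == "p1" and rec["patient_id"] == patient_id
        or rec["patient_id"] == patient_id
    ]
    timeline.sort(key=lambda r: datetime.fromisoformat(r["timestamp"]))
    return timeline

# Hypothetical feeds from two unaffiliated providers
hospital = [{"patient_id": "p1", "timestamp": "2021-01-05T09:00:00",
             "event": "blood test", "source": "hospital"}]
pharmacy = [{"patient_id": "p1", "timestamp": "2021-01-03T14:30:00",
             "event": "prescription filled", "source": "pharmacy"},
            {"patient_id": "p2", "timestamp": "2021-01-04T10:00:00",
             "event": "prescription filled", "source": "pharmacy"}]

timeline = consolidate_patient_records("p1", hospital, pharmacy)
# The pharmacy visit precedes the hospital blood test in the merged view
```

The point of the sketch is the shape of the problem: every provider contributes events, and the lake presents one ordered history per patient to any authorized care team.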
Data lakes can further evolve to create “AI lakes” — centralized places where AI can access the full patient record, analyze the data, and share the output with other AI applications. Working together, these AIs can drive better decisions for both patients and the health systems treating them.
As regulators confront enterprise AI, the recent breakthroughs in healthcare’s interoperability rules show that data sharing and data privacy are not mutually exclusive. With greater data sharing, enterprises can also leverage AI to make more informed decisions for their businesses.
AI Quality Control
Under proposed rules, the FDA would approve AI as software-as-a-medical-device (SaMD). Key to gaining approval would be evidence that patient outcomes impacted by AI are being tracked, predictions made by AI are continuously improving, and potentially, that algorithms are not biased to favor some populations over others.
Data lakes can also provide an avenue for regulators to validate that AI is working as it’s clinically intended and following quality control standards. Just as CLIA reviews and certifies that clinical laboratories are safe, appropriately handling specimens and accurately performing quality controls, the FDA could review the data lakes to certify AI for data processing and analysis, security, and insights that actually improve the quality of care.
Centralized lakes holding all of a patient’s data would make it easier to deploy new AI without creating new data environments every time. New AI systems can simply plug into the existing data lakes and harness the same data used to gain FDA approval for previous AI tools.
For this to work, data lakes need to be curated to hold the data relevant to assessing the AI’s performance. For example, consider AI trained to recognize strokes in CT scans, where the goal is to shorten the time between the scan and life-saving procedures. To assess the AI’s performance, the data lake would need to hold the time of the patient’s scan, the time the procedure was performed, and validation that the patient did indeed have a stroke.
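The stroke example can be made concrete with a small sketch of the assessment query a regulator or quality team might run against such a curated lake. Field names are hypothetical, and the metric (scan-to-procedure minutes for confirmed strokes) is just one plausible performance measure.

```python
from datetime import datetime
from statistics import median

def scan_to_procedure_minutes(records):
    """Compute scan-to-procedure times, in minutes, for confirmed strokes.

    Each record is a dict with (hypothetical) keys: 'scan_time' and
    'procedure_time' (ISO 8601 strings), and 'stroke_confirmed' (bool).
    """
    times = []
    for r in records:
        if not r["stroke_confirmed"]:
            continue  # only validated strokes count toward the metric
        scan = datetime.fromisoformat(r["scan_time"])
        proc = datetime.fromisoformat(r["procedure_time"])
        times.append((proc - scan).total_seconds() / 60)
    return times

records = [
    {"scan_time": "2021-02-01T08:00:00",
     "procedure_time": "2021-02-01T08:45:00", "stroke_confirmed": True},
    {"scan_time": "2021-02-01T09:00:00",
     "procedure_time": "2021-02-01T10:15:00", "stroke_confirmed": True},
    {"scan_time": "2021-02-01T11:00:00",
     "procedure_time": "2021-02-01T11:20:00", "stroke_confirmed": False},
]

delays = scan_to_procedure_minutes(records)
typical_delay = median(delays)  # 60.0 minutes for this sample
```

If the AI is doing its job, this number should fall over time — and the lake holds everything needed to verify that.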
The same goes for enterprise. To make data lakes useful for AI, it’s important to think through what the intended outcome of the AI will be and make sure that the data lake contains the data necessary to validate that the AI is working as intended.
Lessons for Enterprise
AI evolves constantly as new data is used to train it. Approving a continuously learning system on the basis of a single prediction is futile: as the AI evolves, its predictions will change. Regulatory approval must focus not on the one-time capabilities of AI, but on the process of training AI and assessing the results.
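One way to assess a process rather than a snapshot is to track performance over successive evaluation windows as the system retrains. The following is an illustrative sketch, not a prescribed regulatory test; the window size and outcome encoding are assumptions.

```python
def rolling_accuracy(outcomes, window=3):
    """Track accuracy over successive evaluation windows.

    'outcomes' is a chronological list of booleans indicating whether
    each AI prediction matched the observed result. Returns per-window
    accuracy, so reviewers can see how the system performs over time
    rather than judging a single frozen prediction.
    """
    return [
        sum(outcomes[i:i + window]) / window
        for i in range(0, len(outcomes) - window + 1, window)
    ]

# Illustrative prediction history: early misses, later improvement
history = [True, False, False, True, True, False, True, True, True]
trend = rolling_accuracy(history)
is_improving = all(a <= b for a, b in zip(trend, trend[1:]))
```

A reviewer looking at `trend` sees whether accuracy holds steady or improves across retraining cycles — exactly the process-level evidence a continuously learning system needs.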
As we’ve demonstrated in healthcare, data lakes can be used to aggregate data for AI to drive informed business decisions, while protecting consumer privacy. They also provide a way for regulators to validate that AI is working as intended and that quality controls are being followed.
As a clinician, I am charged with protecting patients from harm. As an informatics professional, I know protecting patients means employing safe, effective, and regulated technology. Technology needs to be regulated to prevent abuse, negligence, and harm. As regulations move to protect people from AI in other industries, healthcare shows the way forward.
Conflict of Interest: Dr. Showalter is the Chief Product Officer of Jvion, a clinical AI provider. Jvion’s AI solution, the CORE™, incorporates a data lake that leverages thousands of clinical, socioeconomic, and experiential data points on over 30 million Americans.
Dr. Showalter is a passionate advocate for the application of advanced information technology to improve patient outcomes. His unique education in biomedical engineering, physiology, clinical informatics and internal medicine has allowed him to work at the intersection of those fields to positively impact patient care and health system efficiency. As Chief Product Officer at Jvion, Dr. Showalter led the development of the COVID Community Vulnerability Map, which leverages AI to identify the communities most vulnerable to severe illness and death from COVID-19. The map has since been viewed by over 2 million people — including members of the White House Task Force, FEMA, every branch of the US military, and state and local governments — and informed public health outreach efforts. His work has been recognized with cross-industry awards including Computerworld’s Premier 100 IT Leaders and Modern Healthcare’s Top 25 Innovators of 2020. Dr. Showalter is dedicated to using his expertise to ensure that Jvion’s AI has the maximum positive impact for patients.