By Vatsal Ghiya
Healthcare is often thought of as an industry on the cutting edge of technological innovation. That’s true in many ways, but the healthcare space is also highly regulated by sweeping legislation such as GDPR and HIPAA, along with many more local guidelines and restrictions. Those legal hoops complicate the implementation of new tools and technologies such as artificial intelligence, which has become a hotly debated topic in the industry for good reason.
Artificial intelligence has the potential to disrupt industries around the world, but few hit quite so close to home as healthcare. According to a survey from HIT Infrastructure, 91% of healthcare insiders think the technology could boost access to care, but 75% believe it also threatens the data security and privacy of patient information. The consequences could be even more serious, however: unrepresentative training samples could also produce flawed models.
Medical decisions are high-stakes, and AI algorithms are only as good as the data they’re trained with. Research from Gartner warns that as many as 85% of AI projects will deliver erroneous outcomes due to bias in data management through 2022. It’s an alarming statistic if those outcomes affect patient health, but it’s also not hard to see why this occurs. High-quality data is difficult to come by, and geographically diverse data even more so. An article published in the Journal of the American Medical Association found that the majority of data used to train AI came from California, New York, and Massachusetts — hardly a population representative of the world.
The Argument for AI
Despite significant obstacles, AI has real promise in healthcare, and it could disrupt everything from diagnoses to how healthcare workers interact with their other technologies. In the field of radiology, for instance, AI is helping doctors pinpoint early indicators of diseases such as cancer and helping radiologists analyze larger volumes of image sets. It’s the same story in pathology, where AI can sort through hundreds of tissue samples to find slides that humans might easily miss.
Thanks to voice-recognition success rates that have reached 99%, healthcare workers can now interact verbally with documentation software, increasing the quality and completeness of documentation while allowing nurses and physicians to spend more time actually caring for patients. With the right digital assistant on the front lines, electronic medical records can autopopulate from a simple conversation between a doctor and patient, saving valuable time and freeing healthcare workers from the necessary but tedious task of thoroughly documenting all of their encounters.
To ensure AI can accomplish the desired outcomes and overcome the obstacles, developers should improve training and ensure compliance by considering these three key elements:
1. Large volumes of accurate training data
Algorithms require massive amounts of accurate data, and that data is often difficult to come by. There are plenty of examples of AI accurately diagnosing a disease, but the practice is often limited in scope to a single hospital or region because of training limitations. To create tools that revolutionize medicine around the world, developers need access to much larger and broader samples of data.
2. Diverse data to remove bias from results
Since AI’s inception, bias and inequality have plagued it. To overcome these obstacles, developers must collect data that’s representative of the greater human population instead of a single city or country. That means gathering and licensing data from all over the globe.
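One simple way to surface that kind of skew before training is to compare where records come from against rough population shares. The sketch below is a minimal illustration using made-up numbers; the region names, shares, and threshold are assumptions for the example, not real figures.

```python
from collections import Counter

# Hypothetical training set: each element is the region a record came from.
train_regions = ["north_america"] * 70 + ["europe"] * 20 + ["asia"] * 10

# Rough (illustrative, not exact) shares of world population by region.
population_share = {"north_america": 0.07, "europe": 0.10, "asia": 0.60}

counts = Counter(train_regions)
total = len(train_regions)

for region, share in population_share.items():
    observed = counts[region] / total
    # A ratio well above 1 means the region is over-represented in training data.
    ratio = observed / share
    flag = "  <-- over-represented" if ratio > 2 else ""
    print(f"{region}: {observed:.0%} of data vs ~{share:.0%} of population{flag}")
```

Checks like this don't fix bias on their own, but they make the gap between a training set and the population it's supposed to represent visible before a model is built on top of it.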
3. Data de-identification to remove PHI and PII
Storing large amounts of patient data from a wide variety of international sources is a recipe for a compliance nightmare — unless that data is carefully de-identified to eliminate all vestiges of identifiable information. De-identified data is no longer considered protected health information, making it easier to store, share, and use for building the next generation of AI tools.
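In practice, de-identification means stripping or generalizing the fields that can tie a record back to a person. The sketch below shows one simplified, Safe Harbor-style pass over a record; the field names and date convention are assumptions for illustration, and real pipelines must cover all 18 HIPAA identifier categories, including free-text fields.

```python
# Hypothetical direct-identifier field names; a real system would follow the
# full HIPAA Safe Harbor list (names, contact details, IDs, biometrics, etc.).
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers dropped and
    dates truncated to year — a rough Safe Harbor-style pass."""
    clean = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        if field.endswith("_date") and isinstance(value, str):
            clean[field] = value[:4]  # keep only the year (YYYY-MM-DD -> YYYY)
        else:
            clean[field] = value
    return clean

record = {
    "name": "Jane Doe",
    "mrn": "12345",
    "admission_date": "2021-03-14",
    "diagnosis": "type 2 diabetes",
}
print(de_identify(record))  # {'admission_date': '2021', 'diagnosis': 'type 2 diabetes'}
```

The payoff is exactly the point above: once identifiers are gone, the remaining clinical fields can be pooled across institutions with far less regulatory friction.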
AI has an incredible amount to offer healthcare, but only if developers can successfully navigate the perils it presents. With large, diverse, and de-identified data sets that improve accuracy and reduce bias, developers can harness AI and machine learning to tackle healthcare problems that once seemed insurmountable.
Vatsal Ghiya is a serial entrepreneur with more than 20 years of experience in healthcare AI software and services. He is the CEO and co-founder of Shaip, which enables on-demand scaling of its platform, processes, and people for companies with the most demanding machine learning and artificial intelligence initiatives.