By David Talby, CTO, John Snow Labs
Responsible AI is the practice of designing, developing, and operating AI systems with good intentions. This includes empowering employees and businesses, as well as fairly impacting customers and society, enabling companies to engender trust and scale AI with confidence (Accenture). But creating models that are safe, inclusive, accurate, fair, reliable, explainable, and bias-free is easier said than done.
This is especially important in industries like healthcare and life sciences, where AI models are in the driver’s seat, automating real-life clinical decisions. Consider a medical device that determines how much of a drug to administer to a patient. If the model behind it was trained on data that excludes certain patient populations, it can be dangerous to the very people it was never optimized for.
So, what are the biggest concerns around responsible AI and how can we address them? For now, it’s a work in progress, but here are a few main areas to watch:
Optimizing Revenue & Growth Over Cures That Work
The commercialization of AI and hyper-growth pressures have put factors like time-to-market and revenue growth ahead of safety and efficacy. Wellness and mental health applications are a good example: although they have become very popular in recent years, there is little to no clinical evidence that they work. While they’re not necessarily causing harm, there’s little to indicate they deliver on their promise.
The healthcare and pharmaceutical industries are more mature than other verticals when it comes to AI ethics. Because “magic cures” have been marketed to desperate families and unsuspecting buyers since the dawn of time, killing many in the name of quick profits, pharmaceuticals became one of the most heavily regulated industries. The rise of mass production and statistics in the nineteenth century evolved into a system that largely ensures that new medications do more good than harm.
AI today is where pharma was in the middle of the nineteenth century: there are no established best practices, no quality control, and no external oversight, and few companies are doing it well. We have the tools and best practices to solve some of these problems, but we’re not enforcing them broadly. At some point this will change, but for now, the onus is on each business (and on each of us as practitioners) to ensure their AI acts in the best interest of the people it serves.
Model Governance and Degradation
Model governance — how a company tracks activity, access, and behavior of models in a given production environment — is a key component for getting AI models safely into production and ensuring they stay that way over time. It’s important to monitor this to mitigate risk, troubleshoot, and maintain compliance. This concept is well understood among data scientists, but it’s also a thorn in their side, because current tooling is still in its early stages.
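To make this concrete, here is a minimal sketch of one building block of model governance: an audit trail that records which model version was called, by whom, and when. Everything here is illustrative rather than a reference to any specific tool; the model interface and field names are hypothetical, and real systems would typically log to a central store rather than a local logger.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Minimal audit logger: records who called which model version and when,
# plus a hash of the input (never the raw input, which may contain PHI).
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def predict_with_audit(model, model_version: str, user_id: str, features: dict):
    input_hash = hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()
    ).hexdigest()
    prediction = model.predict(features)  # hypothetical model interface
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "user_id": user_id,
        "input_sha256": input_hash,
        "prediction": str(prediction),
    }))
    return prediction
```

Hashing the input rather than storing it keeps the audit trail itself from becoming a privacy liability, while still letting you prove later which exact input produced a given prediction.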
Rigorous testing and retesting is a good way to ensure models behave the same in production as they do in development. Accuracy, bias, and stability are factors that all practitioners should be analyzing on a consistent basis, and models should be validated before they are launched in new geographies or patient populations. Going beyond a single metric to apply behavioral and exploratory testing, as in the sketch below, is another best practice that should be applied today.
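As one illustration of behavioral testing, the sketch below checks a simple invariance property: changing a clinically irrelevant field should not change the prediction. The model stub, field names, and values are invented for the example; in practice you would run many such perturbations against your real model.

```python
def predict(record: dict) -> int:
    # Stand-in for a real model: classifies risk from lab values only.
    return int(record["creatinine"] > 1.2)

def check_invariance(predict_fn, record: dict, field: str, alternatives: list) -> list:
    """Return the alternative values of `field` that change the prediction."""
    baseline = predict_fn(record)
    return [v for v in alternatives
            if predict_fn({**record, field: v}) != baseline]

record = {"name": "John Smith", "age": 67, "creatinine": 1.4}
failures = check_invariance(predict, record, "name",
                            ["Maria Garcia", "Wei Chen", "Aisha Khan"])
assert not failures, f"Prediction changed for: {failures}"
```

The same pattern extends to bias checks: if swapping a name, gender marker, or other protected attribute flips the output, the test fails and the model should not ship.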
The same rules apply to keeping models from degrading over time, which is to be expected. Testing, automated retraining pipelines, and continuing to measure the same metrics that were tracked before the model was deployed are all crucial to responsible and ethical AI. It’s far more realistic to expect problems than optimal performance, and businesses and practitioners need to stay ahead of this.
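A common way to catch degradation early is to compare the distribution a feature had at training time against what the model sees in production. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic data; the feature, threshold, and drift scenario are illustrative assumptions, not anything from this article.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Baseline: feature values captured at training time (illustrative data).
training_ages = rng.normal(loc=55, scale=12, size=5_000)
# Production: the same feature observed after deployment, now shifted.
production_ages = rng.normal(loc=62, scale=12, size=5_000)

# A small p-value signals that the production distribution no longer
# matches the training distribution, i.e. the model's inputs have drifted.
statistic, p_value = ks_2samp(training_ages, production_ages)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}); "
          "consider retraining or investigating the data pipeline.")
```

Wiring a check like this into a scheduled job, with retraining triggered when it fires, is the kind of automation the paragraph above calls for.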
Data Sharing and Privacy
When you consider all the work it takes to get models into production and keep them there safely, it’s understandable why 87% of data science projects never make it into production. If that’s not hard enough already, add data security and privacy concerns, and responsible AI becomes even more challenging. Businesses need to be especially vigilant when it comes to protected health information (PHI), that is, personal data about patients.
Fortunately, in highly regulated industries like healthcare, some of this is built in, requiring anyone dealing with PHI and other sensitive information to abide by laws like HIPAA. But as threats become increasingly sophisticated and people share more and more information across applications, patient portals, and electronic health records (EHRs), the potential attack surface multiplies. Practitioners also have a responsibility to go beyond the basic requirements of HIPAA, given how easy it is to re-identify data that has been “fully anonymized” by regulatory standards. Otherwise, AI systems can still expose companies to material reputational damage and civil liability.
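The re-identification risk is well documented: Latanya Sweeney famously showed that a large majority of the U.S. population can be uniquely identified from just ZIP code, birth date, and sex. The sketch below, with a toy dataset and hypothetical column names, shows how to measure this exposure on a “de-identified” extract by counting records that are unique on their quasi-identifiers, a basic k-anonymity check.

```python
import pandas as pd

# A "de-identified" extract: names and record numbers removed, but
# quasi-identifiers remain (columns and values are invented for illustration).
df = pd.DataFrame({
    "zip3":       ["152", "152", "100", "941", "941"],
    "birth_year": [1956, 1987, 1956, 1990, 1990],
    "sex":        ["F", "M", "F", "M", "F"],
    "diagnosis":  ["E11", "I10", "C50", "J45", "E11"],
})

# k-anonymity check: how many records share each quasi-identifier combination?
quasi = ["zip3", "birth_year", "sex"]
group_sizes = df.groupby(quasi).size()
unique_rows = (group_sizes == 1).sum()
print(f"{unique_rows} of {len(df)} records are unique on {quasi} "
      "and are candidates for re-identification by linkage to public records.")
```

Any record that is unique on its quasi-identifiers can potentially be matched against voter rolls or other public datasets, which is why removing direct identifiers alone does not make data anonymous.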
Startling recent research found that more than half of the internet-connected devices used in hospitals have a vulnerability that could put patient safety, confidential data, or the usability of a device at risk. While cloud and SaaS providers typically require customers to share data, we need to start exploring safer alternatives. For instance, when software is licensed and deployed in the customer’s own environment instead of being consumed through third-party APIs, data never leaves that environment: it runs within the customer’s security perimeter and under all of its controls. This approach significantly limits the attack surface for a breach.
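The contrast between the two deployment patterns can be sketched in a few lines. Nothing below refers to a real vendor or product; the endpoint, model path, and interfaces are all hypothetical, and the point is only where the patient record travels in each case.

```python
import requests

def score_via_saas_api(record: dict) -> float:
    # Pattern 1: third-party SaaS API. PHI leaves your security perimeter
    # with every request, expanding the attack surface.
    resp = requests.post("https://api.example-vendor.com/v1/score",  # hypothetical endpoint
                         json=record, timeout=10)
    return resp.json()["score"]

def score_in_environment(record: dict, model) -> float:
    # Pattern 2: licensed software deployed inside your own environment.
    # The record never crosses your firewall or leaves your access controls.
    return model.predict(record)  # hypothetical local model interface
```

Functionally the two return the same score; the difference is that only the first pattern makes every prediction request a data-sharing event.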
AI has the potential to improve our world — it’s already doing so in the healthcare and pharmaceutical industries. But in order for AI to live up to its full potential, we need to move toward a future in which companies are accountable when they say AI-powered services and products work. Even without a widely-accepted framework for responsible AI, practitioners should acknowledge the limits of what they build and communicate it clearly. It’s the right thing to do.