AI Needs a Reality Check: Conventional Wisdom Is Killing Innovation

Updated on November 24, 2024

The arrival of penicillin revolutionized medicine by shifting the central question from “How do we contain this infection?” to “Is this infection bacterial?” For the first time, doctors had a targeted, life-saving treatment, and old approaches focused on symptom management quickly became obsolete. Penicillin changed the questions we asked, requiring new ways of thinking and transforming care.

Similarly, AI has the potential to reshape healthcare but requires reframing the questions we ask and challenging long-standing patterns. AI may not replicate the singular impact of antibiotics, but it’s poised to be the first technology with similarly high stakes. Hospitals that fail to adapt to this shift risk becoming relics of the past, missing the opportunity to redefine patient care for the future.

The conventional wisdom of innovation inside the hospital

On the surface, everyone will say AI is the future, but viewing it through the same innovation lens we’ve used for decades risks missing its transformative potential for hospitals. Relying on policies and procedures forged long before AI was possible is like using pre-penicillin protocols in a world where antibiotics exist.

The current evaluation models are simply too slow and narrow to allow AI to deliver the financial or patient care impact hospitals need. If hospitals keep applying outdated frameworks to AI, they risk missing its potential, holding back progress at a time when the system can least afford it.

Regulatory approval and reimbursement  

Current approval processes for drugs and medical devices remain essential to patient safety.  For example, AI applications that perform critical diagnoses—such as determining if a lesion is cancerous—should be regulated rigorously, much like laboratory tests. In these cases, the stakes are clear-cut, and the right starting point is that such technologies undergo stringent regulatory oversight to ensure safety and reliability (broader cases for regulatory change notwithstanding). 

However, this approach addresses only a narrow slice of AI’s uses in patient care. The majority of AI applications will not perform standalone, critical diagnoses. Instead, they will provide clinical decision support, a historically gray area in regulation. While the FDA has acknowledged its authority over these tools, it has typically refrained from regulating clinical decision support systems as strictly as traditional medical devices. With AI’s role set to expand across numerous patient care contexts, a new approach to oversight is needed.

Rather than forcing clinical decision support AI into traditional regulatory molds, the focus should shift to transparency—ensuring AI models are built as claimed and that their real-world impact is documented and accessible. Ideally, this transparency would be driven by market expectations rather than additional regulatory layers, much like security frameworks such as SOC 2 or HITRUST. Hospitals and AI developers should openly share how models are constructed, validated, and used in practice. This openness would create a competitive landscape where effectiveness and trustworthiness drive adoption, fostering a future of AI led by outcomes rather than compliance alone.
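As a thought experiment, such a transparency record could be as simple as a small, machine-readable summary that vendors publish and hospitals compare side by side. The Python sketch below is a minimal illustration under that assumption; every field name and value is hypothetical, not part of any established standard like SOC 2 or HITRUST.

    # Hypothetical "model transparency record" a vendor might publish.
    # All field names and values are illustrative assumptions, not a standard.
    from dataclasses import dataclass, field

    @dataclass
    class ModelTransparencyRecord:
        model_name: str
        intended_use: str            # e.g., decision support, not diagnosis
        training_data_summary: str   # sources, time span, populations covered
        validation_results: dict     # held-out metrics, ideally per site
        deployment_sites: int        # how widely the model runs in production
        known_limitations: list = field(default_factory=list)

    # Invented placeholder values, for illustration only.
    card = ModelTransparencyRecord(
        model_name="fall-risk-vision-v2",
        intended_use="Alert nursing staff to elevated in-room fall risk; "
                     "advisory only, not a standalone diagnostic.",
        training_data_summary="De-identified video from 12 partner units, 2021-2023.",
        validation_results={"sensitivity": 0.91, "specificity": 0.88},
        deployment_sites=9,
        known_limitations=["Not validated for pediatric units"],
    )

A record like this would let hospitals compare claims and real-world evidence across vendors the way they already compare security attestations, without waiting for a new regulatory regime.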

No one ever got fired for buying IBM

For years, choosing IBM was the “safe” decision that shielded leaders from blame if projects failed. This mindset favored loyalty to legacy systems over exploring new, transformative technologies—and it crippled many organizations during disruptions like the Internet and cloud computing.

Hospitals today face a similar choice: maintain legacy software or embrace AI’s potential to transform patient care. Unlike past tech waves—cloud, mobile, social—AI has the power to dramatically impact patient care and hospital financials. Previous tech advances improved efficiency but didn’t create the high-stakes consequences of Blockbuster vs. Netflix for hospitals (though in this case the disruptor will not be an outsider, but hospitals with strong AI woven into their operations disrupting those without). AI, especially Vision AI that “sees” patients, brings this level of urgency to hospitals for the first time ever.

Hospitals now need a balanced approach that enables AI innovation to enter safely and effectively. Experienced IT leaders have preached, largely with good reason, that scattered, department-level solutions create more integration headaches than benefits. But enforcing one-size-fits-all policies risks blocking the adoption of revolutionary AI capabilities.

Hospitals must build a framework that allows AI from innovative, forward-thinking vendors into the organization—regardless of brand. With strong IT leadership, hospitals can create a secure pathway for integrating AI, capturing its full potential while ensuring safety. Those who fail to adapt to this new risk—sticking only to EHRs and legacy systems for inpatient innovation—will be left behind as the transformation of hospitals accelerates around them.

Death by spreadsheet

ECMO (extracorporeal membrane oxygenation), developed in the 1970s, initially offered hope for patients with failing hearts and lungs, but by the 1980s and ’90s, enthusiasm waned due to high costs, risks, and inconclusive studies. On paper, ECMO seemed unjustifiable, yet it succeeded because dedicated doctors refined its use over time, revealing its immense value in intensive care.

Similarly, evaluating AI solely through RFI (request for information) spreadsheets—where groundbreaking, cost-saving capabilities are reduced to a checkbox like “alerting”—creates a skewed perception of risk for hospitals. While planning is essential, it cannot replace real-world experience. Impact unfolds through initial adoption and continuous learning in patient care, not prolonged planning. Hospitals need frameworks supporting safe, real-world AI adoption, with ongoing monitoring to measure effects on patient safety, treatment, and diagnostics—achieving outcomes that spreadsheets alone can’t foresee or appropriately estimate.
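What might that ongoing monitoring look like in practice? Here is a minimal sketch, assuming the hospital logs whether clinicians later confirm each AI alert; the class name, window size, and recorded values are illustrative assumptions, not a reference implementation.

    # Minimal sketch of post-deployment monitoring, assuming the hospital
    # logs each AI alert alongside a clinician's later confirmation.
    from collections import deque

    class RollingAlertPrecision:
        """Track what fraction of recent AI alerts clinicians confirmed."""
        def __init__(self, window: int = 500):
            self.outcomes = deque(maxlen=window)  # True = alert confirmed useful

        def record(self, confirmed: bool) -> None:
            self.outcomes.append(confirmed)

        def precision(self) -> float:
            return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    monitor = RollingAlertPrecision(window=500)
    monitor.record(True)   # clinician confirmed the alert was warranted
    monitor.record(False)  # false alarm
    print(f"Rolling alert precision: {monitor.precision():.0%}")

A live number like this, tracked per unit and reviewed monthly, tells a hospital far more about an AI tool’s real risk and value than any RFI category ever could.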

The Way Forward: Embracing AI with Purposeful Leadership

Delivering AI innovation safely should be a central question for hospital leadership. Given the ongoing breaches across healthcare, current approaches to security and technology management have been mediocre at best. Applying the same principles to AI-driven innovation risks repeating past failures, limiting both security and the emergence of transformative capabilities. Hospital CEOs should find this unacceptable, especially as financial pressures increase and a generational tool (AI) for improving financials and patient safety has arrived.

CEOs need to assess whether their current leadership can meet the demands of innovation. If not, they should consider separating the “run” (maintenance) side of technology management from the “build” (innovation) side. A CIO might focus on operations, while a Chief Digital or Chief Innovation Officer spearheads transformative projects. Each role should have a clearly defined scope, both internally and externally, to avoid overlap and ensure accountability.

Additionally, hospitals should benchmark their innovation capacity, both against internal measures of pace and against peer institutions. Metrics should go beyond security and uptime to capture the impact on care and cost efficiency. Do doctors and nurses see tangible improvements in patient outcomes due to AI? Can operational leaders directly link IT investments to reduced costs? How quickly can new solutions complete security reviews, integrate with ADT (admit, discharge, transfer) feeds and core systems, and reach full rollout?
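That last question lends itself to a concrete benchmark. Here is a minimal sketch, assuming a hospital keeps simple records of when a solution’s security review began and when it reached full rollout; all records and dates below are hypothetical placeholders.

    # Hypothetical innovation-velocity benchmark: median days from the start
    # of security review to full rollout. Records below are invented examples;
    # in practice they would come from the hospital's project tracking.
    from datetime import date
    from statistics import median

    projects = [
        {"review_start": date(2024, 1, 8),  "full_rollout": date(2024, 6, 3)},
        {"review_start": date(2024, 2, 19), "full_rollout": date(2024, 5, 27)},
        {"review_start": date(2024, 4, 1),  "full_rollout": date(2024, 9, 16)},
    ]

    days_to_rollout = [(p["full_rollout"] - p["review_start"]).days for p in projects]
    print(f"Median days from security review to rollout: {median(days_to_rollout)}")

Tracked quarter over quarter and compared with peers, a number like this makes innovation pace as visible to the board as uptime already is.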

Simply mitigating risk will not lead hospitals into a future of abundant capacity through AI. Hospital system CEOs must demand more now. The penicillin of AI must be incorporated into hospitals and adopted for the benefit of the patient, and potentially for the very existence of your hospital.