How Health Systems Can Actually Scale AI — Without Burning Budget or Trust

Updated on October 12, 2025

For all the energy going into vendor demos and internal task forces, it’s still uncommon to find an AI deployment that’s made it past a department-level pilot or secured a stable place in next year’s budget.

It’s not that the technology isn’t working. In many cases, the underlying models are improving and the workflows are well-intentioned. But it’s increasingly clear the gap has less to do with innovation readiness and more to do with how these solutions are introduced, measured, and funded.

The path to visible progress runs through operational problems that are already being measured, tied to budget, and owned by someone under pressure to improve.

That starts with reframing how the pilot is defined. Rather than leading with broad strategic goals or department-level interests, it may be better to start with a problem statement that finance or operations already owns: denials that require manual review, discharge workflows that stall after rounding, or claims delayed by documentation gaps.

From there, use case selection matters more than many teams realize. It’s easy to default to something that sounds meaningful but can’t be measured cleanly. That’s where attribution can get fuzzy. The better approach is to start where the data is already flowing, like prior authorization queues, billing lag, or workforce scheduling. When baselines are already tracked, it’s far easier to quantify the impact of any new tool — and to compare it against the counterfactual.

That makes the next move more credible: aligning on measurement before the pilot begins. That means agreeing on what data will be pulled, when, and by whom — not after the fact, but as part of the pilot design. It also means naming how attribution will work. Will gains be measured in time saved, revenue captured, or costs avoided? And who signs off on whether the gain is real? That clarity is often missing, which can cause even a good result to be dismissed.

This step also tees up how the pilot evolves into a budgeted program. That leap is often where AI momentum stalls. The solution works, but the story doesn’t. A few champions say it’s promising, a department sees benefit, but nobody’s quite sure how to scale it, or who should pay for it. That’s why it helps to design every pilot as a case study in the making. Get permission to use the data. Capture feedback as you go. And build toward a narrative that’s easy for finance to tell back to leadership.

That narrative gets even stronger when the vendor contract reflects shared risk. That’s becoming increasingly expected — especially in large health systems. Performance-based pricing, milestone-driven fees, and deferred payments are all signals that a vendor believes its own ROI story. These models don’t just protect the budget; they also build trust.

Some real examples do emerge in operational workflows. OhioHealth used AI to improve inpatient discharge planning, reducing excess days and increasing throughput — reportedly saving over a million dollars within months. Jackson Health applied similar tools to remove thousands of avoidable inpatient days by surfacing real-time discharge barriers and prompting staff interventions. In the revenue cycle, Montage Health used AI to automate claims status follow-ups, reducing A/R days and freeing hundreds of staff hours. In each case, the common thread is structural rigor: clear problem definition, measurable baselines, and narratives framed in financial terms.

That matters because these kinds of operational wins tend to fund the longer-term bets. If you’re aiming for predictive clinical tools, population-level risk scoring, or care navigation support, you’ll need time, trust, and data rights to get there. Small, defensible wins help earn those.

So for buyers evaluating AI, the test isn’t just technical. It’s strategic. Can you isolate the problem? Can you measure the before-and-after? Can you attribute the change to the solution, and agree on who gets credit? If those answers aren’t clear up front, it’s worth slowing down before the pilot begins.

And for vendors, the shift is equally clear. It’s not enough to show what the tool does. You have to show where it lands in the budget. That means standing behind your attribution model, aligning incentives where possible, and helping the buyer tell the story internally — in their terms.

None of this requires reinventing the go-to-market. It just requires acknowledging how health systems actually buy.

AI’s role in healthcare won’t be decided in slide decks or at conferences. It’ll be decided in operating reviews and budget meetings. The solutions that scale will be the ones that know that going in.

Andy Strunk
Founder at Accretive Edge

Andy Strunk is the Founder of Accretive Edge.