Editor’s Note: The author is a Senior Partner at Lotis Blue.
AI is advancing at a pace that defies intuition. Most people think in linear terms: steady, incremental progress. But AI is improving exponentially. The capabilities of AI models are increasing even as computing becomes faster and cheaper – and these forces compound each other. As Jensen Huang, CEO of Nvidia, noted in a recent interview, “Every new generation of AI is not just better; it is building the next generation.” Progress layers on itself: by his account, AI tools have become “100 times more powerful” in just two years.
This is why many describe the moment as a new Industrial Revolution. In the past, machines replaced physical labor. Today, AI is taking on tasks that once required human cognition and judgment. As models continue to advance, computation costs will decline, and applications will become more abundant and more deeply integrated into the foundations of daily life. Some estimates suggest that AI may soon play a dominant role in generating the first-pass summaries, drafts, analyses, translations, and technical scaffolding that underpin modern knowledge work.
And by the latter half of the decade, the convergence of AI and robotics will reshape physical work as well, from logistics and pharmacy operations to elements of clinical workflow. AI agents will coordinate tasks, initiate next steps, and remove friction across complex clinical and administrative workflows and systems. In the not-too-distant future, it seems inevitable that autonomous vehicles will pick patients up at home and drive them to appointments, and that robots will take on the roles of patient access specialist and care coordinator, both inside and outside the home.
The question for health care is no longer whether AI will transform the workforce, but whether organizations will be ready for the speed of the shift already underway.
Health Care as an Industry Is Behind the Curve
Despite AI’s rapid evolution, health care has been slower than most industries to adopt it – particularly in clinical roles where the impact could be most significant. While new research shows that this may be starting to change, deeply risk-averse cultures, complex regulations, and fragmented data have created a protective posture that sometimes slows experimentation and rewards caution within the industry.
Abdelwanis and colleagues, in a recent Safety Science review, aptly captured this reality: “Organizational challenges such as infrastructure limitations, inadequate leadership support, and regulatory constraints remain significant barriers to AI adoption in clinical practice.”
Meanwhile, other industries have moved forward. A decade ago, self-driving cars were treated as implausible. Today, full self-driving capability demonstrates that iterative improvement, despite setbacks, can lead to meaningful autonomous performance. Tesla’s vehicle safety data shows that vehicles operating with Full Self-Driving experience substantially fewer collisions than national human-driving benchmarks. Progress didn’t come from avoiding risk; it came from learning through it.
Health care has struggled to build similar momentum – and for good reason. Although AI can already outperform humans in pattern recognition, summarization, and administrative processing, adoption is slowed by concerns about safety, changes to professional roles, unclear regulatory pathways, and, in some cases, patient uneasiness about AI in care delivery. Payers also introduce administrative friction that could be alleviated by AI and automation. The deeper issue, however, is structural: the industry must uphold its commitment to patient safety while building the incentives and operating models needed to accelerate responsible AI innovation.
Changing the Narrative—From Fear to Elevating Purpose and Practice
To move forward, health care must shift its narrative about AI. Much of today’s discourse centers on risk. Will AI make mistakes? Will roles be diminished? Will the clinician’s craft be devalued? However, this framing overlooks the real opportunity: returning people to the purpose of their work, rather than the tasks that have accumulated around it.
Huang articulated this distinction clearly during a recent interview by arguing that jobs are built around a core purpose: creating value or addressing a human need. But over time, layers of tasks accumulate, documentation grows, and administrative work expands. Eventually, the mechanics of the job overshadow its purpose and the meaning that humans derive from it. AI’s real power, Huang suggested, is not in replacing people, but in stripping away everything that was never the point of the job in the first place.
To illustrate the idea, Huang revisited a widely cited prediction made nearly a decade ago. In 2016, Geoffrey Hinton, often referred to as the “godfather of AI,” warned that people should reconsider training as radiologists because AI would soon outperform humans in image recognition. At the time, the prediction fueled concerns that AI would render the profession entirely obsolete. The irony, Huang noted, is that the opposite has happened: the number of radiologists has increased, and today nearly every radiologist uses AI in some capacity.
The explanation lies in returning to purpose. The purpose of a radiologist is not to study images for their own sake; it is to diagnose disease. Image analysis is a task in service of that goal. As AI has made image interpretation faster and more precise, radiologists have been able to read more studies, handle greater complexity, and support higher clinical volumes. Better productivity has improved economics for hospitals, which in turn has driven demand for more, not fewer, radiologists.
Recent workforce projections published in the Journal of the American College of Radiology suggest continued growth in the U.S. radiology profession over the coming decades. Moreover, decades of research in the psychological literature show that meaning and purpose are among the highest-order drivers of engagement and joy at work.
The lesson extends well beyond radiology. Clinicians did not go into medicine to type notes, navigate prior-authorization portals, or click endlessly through EHR menus. These tasks are artifacts of the system, not expressions of clinical purpose. When AI automates documentation, coding, summarization, scheduling, pattern matching, and protocol retrieval, clinicians can operate more consistently at the top of their license – diagnosing, interpreting, communicating, and caring.
This shift is more than just cultural – it’s structural. AI becomes the first draft of everything. The assistant works ahead of the clinician, not behind. The system tracks what matters so humans can focus on what matters most.
What to Expect in 2026—How AI Will Reshape the Workforce
If recent years were marked by pilots and experimentation, 2026 will be the year AI becomes integrated into the everyday fabric of health care work. AI will also begin to show a step-change impact in health care by moving from information gathering and pattern recognition to reasoning and judgment. The shift will be apparent in current and new health care jobs, leadership expectations, care models, team structures, workforce strategies, learning programs, and daily workflows.
In 2026, the most visible clinical workforce impact will be in the administrative “time sinks” that divert clinicians away from patient care. Research examining physician workflow and time allocation found that documentation and administrative work consume nearly twice as much time as direct patient care.
The biggest shift is that AI will increasingly produce the first draft of clinical work (notes, summaries, and orders), while clinicians concentrate on higher-level tasks such as validation, interpretation, and decision-making.
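To make that pattern concrete, here is a minimal, purely illustrative sketch of the “AI drafts, clinician validates” loop. All names (generate_draft_note, ReviewResult, and so on) are hypothetical stand-ins for EHR and vendor integrations; no specific product or API is implied.

```python
# Illustrative sketch of an "AI drafts, clinician validates" workflow.
# All names are hypothetical stand-ins; no real EHR or vendor API is implied.
from dataclasses import dataclass


@dataclass
class DraftNote:
    text: str
    model_confidence: float  # supplied by the drafting model, 0.0-1.0


@dataclass
class ReviewResult:
    approved: bool
    final_text: str


def generate_draft_note(encounter_transcript: str) -> DraftNote:
    """Stand-in for the AI drafting step (e.g., ambient documentation)."""
    summary = f"Draft note based on encounter: {encounter_transcript[:40]}..."
    return DraftNote(text=summary, model_confidence=0.87)


def clinician_review(draft: DraftNote) -> ReviewResult:
    """The clinician edits and signs off; nothing is filed without approval."""
    edited = draft.text  # in practice, the clinician edits in the EHR
    return ReviewResult(approved=True, final_text=edited)


def finalize_note(encounter_transcript: str) -> str:
    draft = generate_draft_note(encounter_transcript)
    review = clinician_review(draft)
    if not review.approved:
        raise ValueError("Draft rejected; route back for re-drafting")
    return review.final_text


if __name__ == "__main__":
    print(finalize_note("Patient reports intermittent chest pain on exertion."))
```

The point of the pattern is the division of labor: the model does the first pass, and the clinician remains the accountable decision-maker.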
Ambient listening and documentation technology will rapidly improve and become mainstream. Evidence is already accumulating that ambient documentation technology is associated with reduced clinician burnout and improved well-being. In practice, this means physicians and APPs will spend less time in the EHR after hours and more time with patients – and on clinical reasoning and decision-making rather than clerical work.
Decision support will expand from imaging into everyday care pathways. AI’s pattern-recognition advantage will continue to strengthen diagnostics and prioritization workflows. Radiology has demonstrated earlier proof points than other specialty areas, with AI tools increasingly supporting scan prioritization, detection, and, in some cases, workflow efficiency—augmenting clinicians rather than replacing them. The workforce effect is subtle but powerful: faster reads and better triage support more favorable outcomes, change staffing models, and raise demand for clinicians who can supervise and integrate AI outputs responsibly.
Nursing and care team workflows will start to be redesigned to automate repetitive tasks. The American Hospital Association highlights that automation can free up meaningful portions of repetitive work and posits that GenAI can be a productivity lever in clinical operations – especially when leaders move beyond pilots into workflow redesign. In 2026, expect to see more virtual nursing, AI-assisted triage, and predictive tools that help teams anticipate patient deterioration, manage capacity, and coordinate follow-up care. This will support clinicians as they shift toward “top-of-license” work.
AI governance will also emerge as a core clinical competency. As predictive and generative tools spread, hospitals will formalize oversight, including accuracy evaluation, bias assessment, and post-implementation monitoring, because clinical leaders will be held accountable for safe performance in production, not just pilot success.
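As a purely illustrative sketch of what post-implementation monitoring can look like in practice, a governance team might periodically compare production outputs against clinician-adjudicated labels and flag a model for review when accuracy drops below an agreed floor. The accuracy floor and data shapes below are hypothetical, not a regulatory or institutional standard.

```python
# Illustrative sketch of post-implementation monitoring; the accuracy floor
# and data shapes are hypothetical, not a regulatory standard.
from statistics import mean


def monitor_batch(predictions: list[int], adjudicated: list[int],
                  accuracy_floor: float = 0.90) -> dict:
    """Compare a batch of production predictions against clinician-adjudicated
    labels and flag the model for governance review if accuracy falls below
    the agreed floor."""
    matches = [int(p == a) for p, a in zip(predictions, adjudicated)]
    accuracy = mean(matches) if matches else 0.0
    return {"accuracy": round(accuracy, 3),
            "needs_review": accuracy < accuracy_floor}


# Example batch: 9 of 10 model outputs agreed with clinician adjudication.
print(monitor_batch([1, 0, 1, 1, 1, 0, 1, 1, 1, 1],
                    [1, 0, 1, 1, 1, 1, 1, 1, 1, 1]))
```

Bias assessment follows the same logic, with the comparison sliced by patient subgroup rather than computed in aggregate.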
How AI Will Impact Non-Clinical and Administrative Work in 2026
In 2026, administrative functions are expected to see faster “hard ROI” adoption because the work is often rules-based, high-volume, and measurable. The change will not simply be efficiency; it will be job redesign. Specifically, fewer roles will be responsible for shepherding transactional workflows, and more roles will focus on handling exceptions, ensuring quality, and maintaining governance.
Contact centers and patient access will shift to AI-augmented service. The AHA points to real-world examples where GenAI-augmented call centers have reduced wait times and improved first-call resolution – a preview of the gains that will begin to scale in 2026: fewer rote calls handled by humans, and more complex cases escalated to people with better context and tools.
Revenue cycle capabilities will move from “processing” to “exception handling.” Administrative teams will increasingly supervise automated drafting, sorting, and routing (including claims preparation, documentation support, and appeals packets), intervening when edge cases arise. The AHA also cites how AI-enabled appeals processes reduce handling time and misrouting – exactly the kind of measurable workflow where adoption tends to accelerate.
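A hypothetical sketch of that shift from “processing” to “exception handling”: routine items flow through automation, while low-confidence or flagged items are routed to a human specialist. The field names and confidence threshold below are invented for illustration, not drawn from any revenue-cycle system.

```python
# Illustrative sketch of exception-based routing in an automated revenue-cycle
# queue; field names and the confidence threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class Claim:
    claim_id: str
    auto_confidence: float   # confidence of the automated coding/routing step
    flagged_edge_case: bool  # e.g., unusual payer rules or missing documentation


def route_claim(claim: Claim, confidence_floor: float = 0.95) -> str:
    """Auto-submit routine claims; escalate edge cases to a human specialist."""
    if claim.flagged_edge_case or claim.auto_confidence < confidence_floor:
        return "human_review_queue"
    return "auto_submit"


print(route_claim(Claim("C-1001", 0.98, False)))  # -> auto_submit
print(route_claim(Claim("C-1002", 0.91, False)))  # -> human_review_queue
```

The human role in this model is concentrated on the exceptions, which is where judgment, payer knowledge, and quality assurance matter most.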
Clinical documentation integrity (CDI) and coding support will rely increasingly on AI-driven solutions. Expect CDI functions to lean more heavily on AI assistance and embedded guidance tools as systems push for accuracy and completeness at scale. The CDI discipline emphasizes scalable approaches to documentation accuracy and improvement – fertile ground for AI copilots that reduce manual lookup and standardize best practices.
AI-focused workforce capability-building will also be formalized into structured programs, driven by collaboration between progressive HR leaders and executive leadership. 2026 is the year many organizations will standardize baseline AI literacy – especially in areas such as privacy, transparency, monitoring, and human-in-the-loop expectations. The responsible-use principles from the Association of American Medical Colleges underscore the broader direction: human-centered use, transparency, privacy protection, and ongoing evaluation – concepts that will increasingly appear in onboarding and role expectations, extending well beyond clinicians.
In a health system where clinical talent will remain in short supply for the foreseeable future, AI can be viewed as one way to help rebalance labor supply and demand. It is an opportunity to address the often-cited number-one challenge in health care: access to clinicians who practice at the top of their license.

Aaron Sorensen, PhD
Aaron Sorensen, PhD, is a Senior Partner at Lotis Blue and serves as Head of the Executive Talent Management practice and leads the firm's Behavioral Science Center of Excellence. He has more than two decades of experience supporting leading organizations as they work to improve results and achieve new performance trajectories through their people, teams, and organizational structures. His work focuses on applying behavioral science and data-driven insights to executive assessment, leadership development, and team effectiveness. Aaron holds a Master's and Doctorate in Industrial/Organizational Psychology from DePaul University.