
The Agentic AI Tension at HIMSS26: Agents Everywhere, Validation Lacking 


The hallways of HIMSS26 told two stories at once. On the main stage: a wave of enterprise AI announcements — Amazon’s expanding health cloud, Epic’s no-code agents, Microsoft Copilot’s growing third-party clinical app ecosystem, and Google’s clinical AI partnerships. HIMSS26 represented the most concentrated convergence of enterprise-scale, production-ready AI agent announcements in the conference’s history. In the corridors between sessions: a quieter, more urgent conversation about how to stitch these disparate agents together — and who’s accountable when it goes wrong. 

The accountability gap is not abstract. A Black Book Research survey of 182 U.S. hospital leaders found that only 22% report high confidence in their ability to deliver a complete, auditable AI explanation to regulators or payers within 30 days (Black Book Research, “U.S. Hospitals Underfund AI Governance as Adoption Accelerates,” November 12, 2025). As STAT News reported from the HIMSS26 floor, AI agents are proliferating in health care faster than they can be counted — a pace that is outrunning the validation frameworks needed to govern them (STAT News, March 11, 2026). At the same conference where vendors were announcing their next generation of autonomous agents, the organizations being asked to deploy them couldn’t yet answer for the ones already running. 

That tension is not a PR problem. It is a strategic one — and it points to something deeper than technology selection or vendor preference. What follows is a framework for understanding where health systems actually are with AI, where the real gaps live, and a three-stage model for what comes next. 


The Agentic Acceleration: What’s Actually Happening 

Enterprise platforms are racing to embed AI agents across financial, operational, and patient-facing workflows. The solutions showcased at HIMSS26 were largely concentrated in high-volume, back-office use cases: prior authorization, clinical documentation and coding, denials management, patient outreach, and scheduling. These are not small bets — they represent meaningful automation of revenue cycle and care coordination functions that have historically required significant human labor. 

The operational efficiencies from early movers are beginning to validate the investment thesis. UC San Diego Health, deploying Amazon Connect Health across 3.2 million patient interactions annually, is saving one minute per call — redirecting 630 hours of staff time weekly from routine verification to direct patient assistance — while cutting call abandonment rates by 30%, and up to 60% in some departments. At Ochsner Health, patients used Epic’s Patient Engagement AI platform to reschedule more than 14,900 appointments autonomously, shifting staff capacity from transactional inquiries to higher-acuity patient needs. 

But even as the market accelerated adoption, providers pushed back. Validation is immature. Regulatory frameworks are lagging. And health system leaders — many of whom are already managing live agentic deployments — are increasingly wary of extending AI authority without clearer accountability structures. 

Most health systems today are somewhere in Stage 1: deploying purpose-built agents in bounded, high-volume workflows where the ROI case is clear and the blast radius of failure is manageable. The tools are arriving faster than the infrastructure to connect and govern them. That gap — between what’s being activated and what’s being orchestrated and governed — is where the real strategic work begins. 


The Orchestration Problem: What No One Has Solved 

Beneath the high-visibility announcements, a more fundamental systems problem surfaced repeatedly. The challenge is not a shortage of agents — it is the absence of a governed coordination layer between them. 

A production-grade agentic architecture requires more than individual AI tools. It requires an LLM capable of interpretation and drafting, structured data validation, business rule evaluation, workflow routing and escalation logic, defined human checkpoints, and logged outputs for audit. Very few organizations have this stack fully integrated. What most have instead is a collection of point agents — purchased independently, from different vendors — that now need to interact with each other, the EHR, the claims system, and payer authorization panels simultaneously and in real time. 
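The stack described above can be sketched in a few dozen lines. The following is a minimal, hypothetical illustration (not any vendor's API; every function and field name is an assumption) of what a governed coordination layer does with a single agent output: validate its structure, evaluate business rules, route or escalate with a defined human checkpoint, and log every step for audit.

```python
import datetime

# Illustrative sketch of a governed coordination layer. All names
# (fields, thresholds, queue names) are hypothetical.

AUDIT_LOG = []

def audit(step, payload):
    """Append every decision point to a log for later review."""
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,
        "payload": payload,
    })

def validate(draft):
    """Structured validation: the LLM draft must carry required fields."""
    required = {"patient_id", "procedure_code", "confidence"}
    missing = required - draft.keys()
    if missing:
        raise ValueError(f"draft missing fields: {missing}")
    return draft

def apply_business_rules(draft):
    """Business-rule evaluation: low-confidence drafts get flagged."""
    draft["needs_human_review"] = draft["confidence"] < 0.85
    return draft

def route(draft):
    """Workflow routing with a defined human checkpoint."""
    if draft["needs_human_review"]:
        audit("escalate_to_human", draft)
        return "human_queue"
    audit("auto_submit", draft)
    return "payer_submission"

def process_agent_output(draft):
    """End-to-end path for one agent output through the governed layer."""
    audit("received", draft)
    draft = validate(draft)
    draft = apply_business_rules(draft)
    return route(draft)

# Example: a prior-auth agent emits a low-confidence draft.
destination = process_agent_output(
    {"patient_id": "P-001", "procedure_code": "72148", "confidence": 0.62}
)
print(destination)  # low confidence, so it routes to a human queue
```

The point of the sketch is that none of this logic lives inside any individual agent. It is the connective tissue between them, which is exactly the layer most organizations have not yet built.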

Across our conversations with health system executives on the HIMSS26 floor, the language was operational — not architectural: 

“We’ve bought five AI tools and they don’t talk to each other.” 

— Health system CIO, HIMSS26 

Leaders described scenarios where RCM agents had no visibility into what the prior auth agent had already retrieved, and where agent failures left entire teams uncertain about who owned accountability for the outcome. 

“Our team has led the way with many agents developed throughout our health system — but the next step is to connect them all together.” 

— Chief Digital and Information Officer, HIMSS26 

Health systems don’t have an AI tools problem. They have a systems problem. The agents work. The seams between them don’t. Orchestration is the missing middle layer — and right now, every major platform vendor is racing to own it. 

This is Stage 2 of the maturity arc — and almost no one in the industry has built it yet. The question isn’t whether orchestration gets solved. It will be. The question is whether your organization defines the architecture — or inherits someone else’s. 


Runtime Governance: The Non-Negotiable Layer 

HIMSS26 governance sessions brought together clinicians, lawyers, data scientists, and executives to work through questions that are no longer theoretical: How do you audit an AI decision that contributed to a clinical outcome? Who is accountable when an automated workflow fails mid-process? 

The data on readiness and corresponding investment is sobering — and consistent across multiple sources. The median budget allocation for AI governance and safety sits at just 4.2% of the combined IT and Quality/Safety budget (Black Book Research, “U.S. Hospitals Underfund AI Governance as Adoption Accelerates,” November 12, 2025). A separate Healthcare Financial Management Association (HFMA) study reinforces the picture: while 88% of health systems report using AI internally, just 18% have a mature governance structure and a fully formed AI strategy in place (HFMA, August 2025). Hospitals are spending heavily on agents and too little on the accountability infrastructure to govern them. 

Observability is not a finishing layer applied once everything else is built. For autonomous agents operating in clinical and financial workflows, runtime governance is a prerequisite — the mechanism by which an organization earns the right to extend AI authority further. 


A Maturity Model: Activate → Orchestrate → Govern 

Where are you today? Most health systems can place themselves somewhere on this spectrum — and the answer shapes every AI investment decision that follows.  

All three capabilities begin together and deepen as agent volume grows; the stages describe shifting emphasis, not strict prerequisites. 

Stage 1 — Activate: 

Deploy purpose-built agents in bounded, high-volume workflows with clear ROI — prior auth, documentation, denials, scheduling. This is where most health systems are today. The risk at this stage is accumulating a collection of unconnected point solutions that create new coordination burdens. UC San Diego Health and Ochsner Health are examples of organizations extracting real, measurable value at Stage 1 — but both would be the first to acknowledge that the harder work of connecting those agents into a coherent system lies ahead. 

Stage 2 — Orchestrate: 

Establish a governed integration and workflow layer that enables agents from different vendors to share context, hand off work, and escalate to humans with defined logic. Most health systems haven’t built this yet — and the hesitancy to extend agents further into clinical workflows is, in part, a rational response to not having it. A peer-reviewed study from Mount Sinai’s Icahn School of Medicine, published in npj Health Systems in March 2026, illustrates why: under real clinical-scale workloads, single-agent accuracy collapsed from 73% to just 16% as task volume increased — while orchestrated multi-agent designs maintained consistent performance and used up to 65 times fewer computational resources. Health systems aren’t wrong to be cautious about clinical AI deployment. They’re wrong to think that caution alone is a strategy. Orchestration is what earns the right to go further. 

Stage 3 — Govern: 

Implement runtime observability, audit trails, bias monitoring, and change control — before agents are extended into higher-stakes clinical decisions. Governance is not the exit ramp from AI; it is the on-ramp to justified scale. Stage 3 is not aspirational. For most health systems, it is overdue. 

The sequence is clear. The question is where to start. 


What Leaders Should Do in the Next 90 Days 

The window for thoughtful sequencing is narrowing. As vendor pressure mounts and board-level AI expectations intensify, health system leaders risk making tool acquisition decisions that will be expensive to unwind. Here is what to do instead:

1. Audit your current agent inventory (Stage 1 — Activate)

List every AI agent or automated workflow your organization has deployed or contracted for — across clinical, operational, and revenue cycle functions. For each one, document: who owns it, what systems it touches, and how it hands off to the next step. If you can’t answer those questions in a week, you have a governance gap today.

2. Map the seams, not just the solutions (Stage 1 → Stage 2 transition)

The failure points in agentic architecture are almost never inside the agents themselves — they live in the handoffs between them. Executives (CIO, CMO, CDO and CFO) should come together in a room and walk through one end-to-end workflow that crosses multiple agents. Watch where it breaks. That’s your orchestration gap — and that’s where to invest next.

3. Assign ownership of the orchestration layer before investing in more tools (Stage 2 — Orchestrate) 

Someone in your organization needs to own the architecture that coordinates your agents — not just individual tools. If that role doesn’t exist yet, create it or designate it: a Chief Automation Officer, an AI Architecture lead, or equivalent. Every additional agent you deploy without this increases your coordination debt.

4. Start your governance infrastructure before you think you need it (Stage 3 — Govern)

Stand up your audit trail and observability framework now — even if you only have Stage 1 agents deployed. Don’t wait until you’re scaling to discover you have no accountability infrastructure. The organizations that govern early are the ones that earn the right to scale fast.  
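What "standing up an audit trail" means in practice can be as simple as an append-only, tamper-evident log of agent actions. Below is a minimal, hypothetical sketch (field names and structure are assumptions, not a standard or product) that chains each record to the previous one with a hash, so any after-the-fact edit breaks verification:

```python
import hashlib
import json
import time

# Illustrative sketch only: a hash-chained, append-only audit trail
# for agent actions. All names here are hypothetical.

class AuditTrail:
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def log(self, agent, action, detail):
        record = {
            "ts": time.time(),
            "agent": agent,    # which agent acted
            "action": action,  # what it did
            "detail": detail,  # inputs/outputs worth retaining
            "prev": self._prev_hash,
        }
        # Hash the record (including the previous hash) to chain it.
        record_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = record_hash
        self._prev_hash = record_hash
        self.records.append(record)
        return record_hash

    def verify(self):
        """Recompute the chain; any altered record breaks verification."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            h = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if h != r["hash"]:
                return False
            prev = h
        return True

trail = AuditTrail()
trail.log("prior_auth_agent", "retrieved_coverage", {"payer": "X"})
trail.log("denials_agent", "drafted_appeal", {"claim": "C-42"})
print(trail.verify())  # True while the log is untampered
```

A production system would add durable storage, access controls, and clinical context, but even a sketch like this answers the regulator's first question: what did the agent do, when, and in what order.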

The organizations that will lead in agentic AI are not necessarily those that moved first. The leaders are the ones that moved deliberately — activating carefully, orchestrating intentionally, and governing continuously. That sequence is available to any health system willing to treat AI as infrastructure rather than a capital expense. 

Download “Unlocking Healthcare’s AI Potential” to hear directly from frontline health workers on today’s biggest tech gaps, and how to close them. 

Cabul Mehta

Industry Principal, Healthcare & Life Sciences at Presidio