I found a hard copy of an old governance playbook I had created in my role as a chief data officer. It’s a detailed document filled with policies, flowcharts, and checklists that once guided how enterprises managed their data. Back then, that was how things worked. I defined the fields and traced the data between systems, making sure every rule was followed.
Governance back then was about data sources, systems of record, and data control. Data moved from point A to point B, and our rules followed. We built data glossaries, captured business rules, ran validation checks on every field, and mapped each transfer. Then we tracked changes to prove that information stayed contained to those with role-based access to it, that it was HIPAA-compliant and protected throughout its lifecycle, and that it was accurate and reliable from start to finish.
This process was linear and predictable, and for a long time, it worked. But that world depended on reactive, system-of-record data. Once generative AI entered the equation, and proactive learning and creation took over, the rules started to fall apart.
Why AI Broke the Old Playbook
At its core, governance has always meant three things:
- defined roles and responsibilities
- defined processes and policies, and
- the tools to see what is or isn’t happening at any point in time
But AI is forcing enterprises to rethink what governance looks like. The moment data started generating new data, the old way stopped working. Large data systems that once followed clear policy frameworks are now being used to produce new insights, and those insights cannot be reviewed one checkpoint at a time.
Take large language models (LLMs), for example. They behave less like pipelines and more like living networks. They learn from every interaction, adapt to context, and find relationships that older data governance frameworks can’t anticipate. Traditional governance was built for order and repetition, but generative AI thrives on variation.
That’s why so many organizations feel stuck. They understand the need for oversight, but they’re applying old frameworks to technologies that rewrite themselves with every iteration. They’re paralyzed by policy debates and internal governance committees, so much so that teams don’t know where to start. Their governance is outdated, and inertia is the result.
Three Basics of Modern AI Governance
To move forward, we need to rethink what we’re really governing. Modern governance must account for how models are built, where they run, and what data they learn from.
1. Inputs and outputs.
What goes into a model, and what comes out? People are already pasting sensitive information into prompts, without realizing that those inputs can be remembered and reused in ways they never intended. In healthcare, that might mean feeding patient records into a model to ask whether a drug is the right treatment. But unless a model has been certified by a clinician, it has no business recommending one therapy over another.
Models are designed to answer questions. That’s what makes them useful and risky. Without oversight, they’ll keep producing responses even when they shouldn’t. Governance on both ends matters: inputs determine what’s exposed, and outputs determine what gets acted on. Both need clear boundaries if AI is going to be used safely and accountably inside an enterprise.
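As a concrete illustration, a minimal sketch of screening both ends of a model call might look like the following. The patterns, blocked-topic list, and function names are illustrative assumptions, not a production design; a real deployment would rely on vetted PHI-detection and policy services rather than ad-hoc rules:

```python
import re

# Illustrative patterns only; a vetted PII/PHI detection service
# would replace these ad-hoc regexes in any real deployment.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-style identifiers
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),   # medical record numbers
]

# Toy policy list: terms that suggest a clinical recommendation.
BLOCKED_OUTPUT_TOPICS = ["recommend", "prescribe", "dosage"]

def screen_input(prompt: str) -> str:
    """Redact obvious identifiers before a prompt leaves the enterprise."""
    for pattern in PHI_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def screen_output(response: str) -> str:
    """Withhold responses that drift into clinical recommendations."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_OUTPUT_TOPICS):
        return "[WITHHELD: response requires clinician review]"
    return response

# Usage: wrap every model call on both sides.
safe_prompt = screen_input("Patient MRN: 12345678 asks about metformin")
# response = call_model(safe_prompt)   # hypothetical model call
# final = screen_output(response)
```

The point is the shape, not the specific rules: every input passes an exposure check before it leaves, and every output passes a policy check before anyone acts on it.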
2. Model hosting and data use.
Where do models run, and who controls them? A public model may be convenient to use, but every prompt can become new training data for someone else’s system. That risk alone makes most enterprises hesitate, and in healthcare, it’s a clear non-starter. Governance means understanding exactly where a model lives, who operates it, and whether its environment is private, shared, or third-party hosted.
Enterprises need to decide up front what’s acceptable. For most, shared or third-party hosting isn’t. Hosting is not a technical decision so much as it is a decision about ownership, privacy, and trust.
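One way to make that up-front decision enforceable is a simple registry recording where each approved model runs and who operates it, plus a gate that refuses to route enterprise data anywhere else. The model names and registry fields below are hypothetical:

```python
from enum import Enum

class Hosting(Enum):
    PRIVATE = "private"            # enterprise-controlled environment
    SHARED = "shared"              # multi-tenant cloud
    THIRD_PARTY = "third_party"    # vendor-operated public endpoint

# Hypothetical registry: every approved model, where it lives, who runs it.
MODEL_REGISTRY = {
    "clinical-summarizer-v2": {"hosting": Hosting.PRIVATE, "operator": "internal"},
    "public-chat-endpoint":   {"hosting": Hosting.THIRD_PARTY, "operator": "vendor"},
}

# The policy decided up front: only private hosting is acceptable here.
ACCEPTABLE = {Hosting.PRIVATE}

def may_send_data(model_name: str) -> bool:
    """Refuse to route enterprise data to models outside the hosting policy."""
    entry = MODEL_REGISTRY.get(model_name)
    return entry is not None and entry["hosting"] in ACCEPTABLE

assert may_send_data("clinical-summarizer-v2")
assert not may_send_data("public-chat-endpoint")
```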
3. Data quality and representativeness.
When models train on enterprise data, they learn the patterns and biases hidden inside it, which means those biases can boomerang on you. Training a diabetes model on data from Northern California, for example, might make it perform well locally but fail completely on patients from Southern California, where diets, environments, and demographics differ.
Remember, bias isn’t always caused by intent. It often comes from context. Governance today means tracing those contexts and validating data sources, then confirming that a model’s “view of the world” matches the reality it’s being asked to navigate.
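One way to make that validation concrete is to compare how a key attribute is distributed in the training cohort versus the population the model will actually serve. A minimal sketch, where the attribute, cohorts, and threshold are all illustrative assumptions:

```python
from collections import Counter

def representation_gap(training_cohort, target_population, attribute):
    """Largest gap between how an attribute (e.g., region or age band)
    is distributed in training data versus the population served."""
    def proportions(records):
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    train = proportions(training_cohort)
    target = proportions(target_population)
    keys = set(train) | set(target)
    return max(abs(train.get(k, 0) - target.get(k, 0)) for k in keys)

# Toy example: a model trained almost entirely on one region.
training = [{"region": "NorCal"}] * 95 + [{"region": "SoCal"}] * 5
served   = [{"region": "NorCal"}] * 50 + [{"region": "SoCal"}] * 50

gap = representation_gap(training, served, "region")
if gap > 0.2:  # threshold is an arbitrary illustration
    print(f"Representation gap of {gap:.0%}; review data sources before deploying")
```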
Letting AI Run (With Guardrails)
Because AI adapts and changes, you can’t write a single policy manual to cover its behavior. Governance for AI has to be flexible and continuous. It must evolve with the models themselves. That means moving away from rigid approvals toward living guardrails that allow for learning and adjustment.
In my experience, the only way to build that kind of framework is through experimentation. You put guardrails in place, test the system and see what breaks, then refine from there. Governance isn’t going to emerge from a policy document. It will come from trial, iteration, and the mechanics built around models we monitor and guide responsibly.
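That test-and-refine loop can start small: a suite of probe prompts replayed against each revision of a guardrail, with every mismatch reviewed before the next iteration. A minimal sketch, with entirely hypothetical probes and policy terms:

```python
# A deliberately simple guardrail to iterate on; the policy terms
# and probe suite are illustrative assumptions, not a real test set.
POLICY_TERMS = ["dosage", "prescribe"]

def guardrail_allows(prompt: str) -> bool:
    return not any(term in prompt.lower() for term in POLICY_TERMS)

PROBES = [
    {"prompt": "Summarize this de-identified chart note", "should_pass": True},
    {"prompt": "What dosage should this patient take?",   "should_pass": False},
]

def run_probe_suite(probes):
    """Replay probes through the guardrail and collect every mismatch,
    so each policy revision is tested against what broke last time."""
    return [p["prompt"] for p in probes
            if guardrail_allows(p["prompt"]) != p["should_pass"]]

# Each cycle: adjust POLICY_TERMS, rerun, review failures, refine.
print(run_probe_suite(PROBES))  # an empty list means this iteration holds
```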
The goal isn’t to restrict AI. It’s to shape it — to teach it what good data looks like and how to operate safely inside an organization. I spent years caging data in large healthcare enterprises. Now we must learn how to coach intelligence.
Governance Belongs in the C-Suite
If there was ever a time for a new kind of governance leader, it’s now. The traditional chief data officer (CDO) role was built for an era of defined data sets and predictable rules. Healthcare now needs a new kind of CDO, one focused on how intelligence is created, trained, and applied across the enterprise. Perhaps the next wave of CIOs will come from the last wave of CDOs.
In the next few years, I expect to see that role, and the teams around it, become standard across healthcare organizations. Governance should, and will, become an integral part of a healthcare organization’s data infrastructure and applications, enabling safe and responsible AI.
I’ve retired my old playbook, but I still think about it often. It reminds me how far governance has come and how much it still matters. The playbook looks different now, but the mission is the same: keeping data, and now intelligence, safe to use.
I’m proud of where we started. I’m even more excited about where we’re going.

Fawad Butt
Fawad Butt is the Co-founder and CEO of Penguin Ai. He previously served as the Chief Data Officer (CDO) at Kaiser Permanente, UnitedHealthcare Group, and Optum, leading the industry’s largest team of data and analytics experts and managing a multi-hundred-million-dollar P&L.