Artificial intelligence (AI) governance is a tricky topic in the health care industry. Stakeholders must collaborate to set reasonable ground rules for the novel technology, but the transparency that collaboration requires can carry risks, health tech leaders mused at a recent Newsweek event.
The webinar, "Health Care's AI Playbook: Building Safe, Smart and Scalable Systems," took place on May 20. An expert panel, including Dr. Brian Anderson (co-founder and CEO of the Coalition for Health AI); Danny Tobey (global co-chair of DLA Piper's AI and data analytics practice); Dr. Andreea Bodnari (CEO of Alignmt.AI); and Dr. Michael Pencina (vice dean and chief data scientist at Duke Health), spoke to and took questions from an audience of health care decision-makers.
Throughout the discussion, panelists acknowledged the limitations of universal standards for AI models. Performance can differ drastically from one organization to another, depending on leadership priorities, frontline users and patient data.

One audience member posed the question: "How can we effectively gather AI deployment outcomes and highlight context-specific implementation best practices as part of a national AI outcomes registry?"
This project is underway at Anderson's Coalition for Health AI, or CHAI. In February, the nonprofit announced a partnership with Avanade to develop a public registry for its health AI applied model cards. These cards act as "nutrition labels" for AI tools, giving potential users insight into the technology's development and any known risks. The registry centralizes that information—creating an industry-wide database of information, applications and lessons learned.
The project is still in the early stages, Anderson said, but he hopes it will serve as a "post-market or post-deployment monitoring network" for CHAI's member organizations.
"We need to be able to understand how these models are actually performing locally and, importantly, [identify] variance in the performance from one population or one geography to another," Anderson said. For example, the data might show a model degrading over time—or, alternatively, producing consistent positive outcomes in an unexpected clinical specialty. Centralizing this information could allow CHAI and its members to spot trends, adding to the growing body of knowledge surrounding health AI applications.
"What we're trying to do in CHAI is create a public space where health systems can safely share that information, in terms of the best practices, of how to deploy and how to use [AI]," he continued.
Eventually, such a registry could catalyze action when new vulnerabilities are detected, added Bodnari of Alignmt.AI, similar to how the National Institute of Standards and Technology (NIST) flags vulnerabilities to certified enterprises.
"That is new, actionable information that you can take home" if you contribute to a public AI registry, Bodnari said. "If you've deployed an ambient AI tool and you receive a notification that there's a vulnerability on a specific patient population, that's something you have to act on right away."
Although Duke Health is a founding member of CHAI and an early collaborator on the registry, Pencina acknowledged the privacy concerns that could limit contributions. Vendors and health systems may hesitate to share certain information, or to share it across a large network: "There is the component of being willing to do it," he said.
Tobey, who has both an M.D. and J.D., looked at the question through a legal and medical lens. Since the health AI market is still largely unregulated, he suggested that there will have to be some legislative incentives for health systems to share data.
"When you create repositories, when you create reporting, there's risk to institutions to participating in that," Tobey said. "One very constructive role that government can play is to incentivize good behavior."
This could look like safe harbor or presumptions of prudence for organizations that participate in voluntary disclosures or registries, Tobey said.
Anderson agreed, sharing that CHAI is exploring an AI-specific patient safety outcomes registry to enhance protections for members.
"It's appropriate and important to call attention [to the fact] that health systems need these kinds of incentives and protections if we are to get this kind of transparency that we need," Anderson said, "particularly at this moment in time, when we are learning about some of the consequences of AI that we candidly don't know yet because these new emerging capabilities are coming at us every week."
Want to continue the conversation? Click here to apply for a complimentary pass to Newsweek's AI Impact Summit in Sonoma, California, from June 23 to 25.