
About Us
We believe that through AI-enabled innovations, we can improve how we deliver patient care today with technologies that are safe, ethical, high-quality, and fair.
The Duke Health AI Evaluation & Governance (E&G) Program is committed to supporting initiatives that ensure the reliable selection, development, deployment, and utilization of AI technologies. We strive to raise awareness within the community about the significance of high-quality and ethical AI practices. Our program fosters continuous advancement in research and framework development related to responsible health AI. We connect experts across Duke and engage in internal, national, and international collaborations. Leveraging Duke’s thought leadership and expertise, our team works alongside the Duke community and a multifaceted group of professionals, including researchers, physicians, nurses, data scientists, implementation scientists, and ethicists from Duke and partner institutions. Together, we develop robust new methodologies and frameworks for the oversight and evaluation of health AI algorithms.
History
The Duke Algorithm-Based Clinical Decision Support (ABCDS) Oversight Initiative emerged at a pivotal moment in the evolution of healthcare AI. In 2020, as AI and machine learning tools became increasingly integrated into clinical workflows, the regulatory landscape remained uncertain. The FDA had only issued draft guidance for Software as a Medical Device (SaMD), and no formal directives existed from other regulatory authorities. Recognizing the need for governance and oversight, Duke Health leadership mandated a structured approach to evaluating and managing AI technologies intended for use at Duke Health. With support from both Duke University and Duke University Health System, this mandate led to the creation of the ABCDS Oversight governance framework, modeled after medical device regulatory approval processes, ensuring that AI solutions deployed at Duke met rigorous standards for safety, fairness, and clinical efficacy.

As the ABCDS Oversight initiative matured, it became clear that the challenges Duke sought to address were not unique to its own health system. The lack of standardized best practices for AI governance and responsible AI was a nationwide issue, and Duke was well-positioned to help shape the conversation. This realization led to the formation of the Duke Health AI Evaluation & Governance Program, which expanded on the work of ABCDS, contributing to the broader health AI ecosystem by creating new, scalable frameworks and methodologies that ensure trustworthy and responsible health AI.
Duke’s leadership in AI governance helped catalyze the creation of national initiatives such as the Coalition for Health AI (CHAI) and the Trustworthy & Responsible AI Network (TRAIN). By publishing governance frameworks, methodologies, insights, and perspectives, Duke has played a critical role in defining best practices for implementing ethical and quality principles.
What began as an internal oversight effort has grown into a program influencing how AI is evaluated, deployed, and governed across the healthcare industry. Duke became one of the first healthcare organizations in the country to mandate algorithm evaluation and monitoring, setting a national benchmark. The Duke Health AI Evaluation & Governance Program continues to lead the way in ensuring that AI-driven healthcare solutions are safe, effective, and fair, both at Duke and beyond.
We Are Committed To:

Innovation, Integrity, and Excellence: Driving impactful health AI development and deployment while maintaining high-quality and ethical practices.
Robust Oversight: Establishing comprehensive oversight for health AI systems, including governance, evaluation, and monitoring, for responsible deployment and use.
Patient-Centered Care: Ensuring AI enhances patient outcomes while promoting safety and patient-provider satisfaction.
“AI has the potential to revolutionize healthcare, but only if it is developed and implemented responsibly. At Duke Health, we are committed to setting the standard for trustworthy AI in medicine, ensuring that every algorithm enhances patient care, empowers clinicians, and upholds the highest ethical standards. Through the AI Evaluation & Governance Program, we are shaping a future where AI is not just a tool, but a reliable partner in advancing human health.”
— Michael Pencina, PhD, Chief Data Scientist, Duke Health
Our Goals
Governance & Oversight: Develop and continuously improve quality management and oversight structures, including the ABCDS Oversight program, to ensure the development, deployment, and use of trustworthy health AI.
Transparency & Trust: Cultivate trust among patients and providers by fostering transparency at all levels of the organization in the development, decision-making, and ownership processes of AI systems.
Standardization & Reporting: Harmonize standard practices and reporting within the sphere of health AI to ensure safety, consistency, and reliability.
Safety & Efficacy Evaluation: Implement a rigorous evaluation process for all clinical algorithms before and after deployment to ensure their safety, effectiveness, and impact.
Ethical Deployment: Ensure that AI technologies are used ethically, prioritizing patient safety and data privacy.
Collaboration & Partnerships: Engage in internal, national, and global collaborations to advance research, framework development, and the adoption of responsible, safe, and fair AI practices in healthcare.
End User Education: Educate healthcare professionals and other end users on effective techniques for evaluating and utilizing AI technologies.
New Responsible AI Approaches: Foster collaboration, innovation, and education to advance responsible health AI by strengthening partnerships, streamlining evaluation and governance with automation, and empowering healthcare professionals.
